Feature/logging #49
Conversation
community/Logging.md
Outdated
running.

Let's assume that our pod name is `service1-75576df8cd-zjpg7`, then from the **Discover** page, in the search bar enter
`kubernetes.pod_name:service1`. This filters the log data for Pods named `service`.
Is there a typo in "... for Pods named `service`"? I guess it should be `service1`?
I'll fix it in a final commit. I'm not pushing that commit yet, since my push would destroy the history of our discussion.
# deploy ELK stack
$(kubectl apply -f ./elk/namespace.yaml)
echo "kubectl apply -f elk/"
echo "Deploying Elasticsearch stack with 3 nodes ..."
What do these 3 nodes refer to?
Oh, 3 replicas of elastic search pods?
Oh, 3 replicas of elastic search pods?
Weird, I don't know why some of my comments got duplicated :S
Yes, let's call it a data node!
I added a few comments...
To browse Kibana, just copy the following address, or replace `<node-ip>` with the public IP of the node:
http://<node-ip>:30000
or
http://192.168.1.2:30000
Should we keep this local IP 192.168... in this readme?
Actually, we print something similar to:
http://<node-ip>:30000
or
http://an-ip-address:30000
The `<node-ip>` is printed literally as `<node-ip>`, while `an-ip-address` is a real IP address. It is an example, like the one we have for the service name.
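The hint described above could be printed like this (a sketch, not the actual deploy script; the `NODE_IP` value is hard-coded here for illustration, whereas the real script would obtain it from the cluster):

```shell
#!/bin/sh
# Sketch of the Kibana address hint: one line keeps the literal
# <node-ip> placeholder, the other shows an example real IP.
NODE_IP="192.168.1.2"   # placeholder value; a real node's public IP
echo "To browse Kibana:"
echo "http://<node-ip>:30000"
echo "or"
echo "http://${NODE_IP}:30000"
```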
generator/deploy.sh
Outdated
# Deploy the microservices to clusters
for d in ./k8s/*; do
echo "applying deployment manifests to ${d##./k8s/}"
[[ -d "$d" ]] && kubectl apply --prune -f k8s/${d##./k8s/} -l version=${d##./k8s/} --context ${d##./k8s/}
We will get an elasticsearch deployment per cluster, each of them with 3 replicas, right?
Yes, but this part of the code (lines 47 to 50) is actually responsible for deploying the services.
Fix: at the moment, we only have an Elasticsearch deployment on a single cluster. We should discuss whether we need an Elasticsearch deployment per cluster or not.
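For readers unfamiliar with the `${d##./k8s/}` expressions in the loop above, this is shell prefix stripping; a minimal sketch:

```shell
#!/bin/sh
# ${d##pattern} removes the longest prefix of $d matching the pattern.
# Here the pattern "./k8s/" has no wildcards, so it simply strips that
# leading path, leaving only the per-cluster directory name.
d="./k8s/cluster1"
echo "${d##./k8s/}"   # prints: cluster1
```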
type FileConfig struct {
	ClusterLatencies []ClusterLatency `json:"cluster_latencies"`
	Services         []Service        `json:"services"`
	Settings         Setting          `json:"settings,omitempty"`
I think if you want a variable of struct type to be optional with `omitempty`, then you need to define a pointer to it here instead. See: https://www.sohamkamani.com/golang/omitempty/
Yes, that's right. We are going to pass this setting in a ConfigMap for the microservices. In a microservice, we parse it in the `run.sh` script using `jq`. We need the `logging` field to be there, with either a `true` or a `false` value. So when we set a struct with `omitempty` and pass nothing there, it converts it to a `false` value.
But if we leave out `omitempty` and our end user doesn't pass this value in the input file, then our generator looks for something that was never passed, raises an error, and does not generate anything!
The fastest way for me to develop it was to set it as `omitempty`. Do you think that we should focus more on this?
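The `run.sh` parsing described above might look roughly like this (a sketch only; in the real script the JSON would come from the mounted ConfigMap file, and the key names are assumptions):

```shell
#!/bin/sh
# Minimal sketch of parsing the logging flag with jq.
# A sample document stands in for the mounted ConfigMap content.
SETTINGS='{"settings":{"logging":true}}'
# jq's // operator supplies a default when the key is missing or null,
# which is how an omitted "logging" field falls back to false.
LOGGING=$(printf '%s' "$SETTINGS" | jq -r '.settings.logging // false')
if [ "$LOGGING" = "true" ]; then
  echo "logging enabled"
else
  echo "logging disabled"
fi
```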
ok... let's then leave it as it is.
Ok then, shall we merge it?
This closes #39.
This PR suggests using Elasticsearch as a logging solution. The proof of concept is described in the `Logging.md` file.