Here we describe how you can set up logging and monitoring for Kubernetes, keeping dashboard design in mind as well.
Logging
Logging is mainly divided into three parts:
Logs collector
Fluentd
Elastic Beats (like Filebeat for logs)
Fluent Bit (a lightweight version of Fluentd, written in C for performance and low resource utilization)
Telegraf (part of the InfluxDB project)
Logs storage
Elasticsearch (best for text-based search)
Filesystem (if you are not interested in text-based search)
InfluxDB (InfluxDB can also store logs, but I am not very familiar with this use case)
Logs dashboard
Kibana (Kibana is the well-known dashboard of the ELK stack and works well with Elasticsearch, but the problem is that it uses a lot of RAM; you can use the Logtrail plugin to show logs in tail form based on some filters)
Here we are going to use Elasticsearch, Fluent Bit, and Kibana as the logging solution for Kubernetes.
Fluent bit/Fluentd
To collect all the logs, you need to run Fluent Bit/Fluentd as a DaemonSet and collect all files from the folder /var/log/containers; a minimal DaemonSet sketch follows. You can also use a Helm chart for this deployment.
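A minimal sketch of such a DaemonSet, assuming a fluent/fluent-bit image and a logging namespace (the names, namespace, and image tag are illustrative, and the ConfigMap with the collector configuration is omitted); the important part is mounting the host's /var/log and /var/lib/docker/containers so the collector can read the container log files:

# Illustrative log-collector DaemonSet; the ConfigMap holding the
# Fluent Bit/Fluentd configuration is omitted for brevity.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluent-bit
  namespace: logging
spec:
  selector:
    matchLabels:
      app: fluent-bit
  template:
    metadata:
      labels:
        app: fluent-bit
    spec:
      containers:
      - name: fluent-bit
        image: fluent/fluent-bit:1.9
        volumeMounts:
        # Container log files written by the kubelet/runtime live here.
        - name: varlog
          mountPath: /var/log
          readOnly: true
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers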
Here we can also create a new index for each Kubernetes namespace; this makes filtering on the Logtrail plugin screen a little easier.
output.conf: |-
  <match **>
    type elasticsearch_dynamic
    log_level info
    include_tag_key true
    host elasticsearch-logging
    port 9200
    logstash_format true
    logstash_prefix namespace-${record['kubernetes']['namespace_name']}
    # Set the chunk limits.
    buffer_chunk_limit 2M
    buffer_queue_limit 8
    flush_interval 5s
    # Never wait longer than 30 seconds between retries.
    max_retry_wait 30
    # Disable the limit on the number of retries (retry forever).
    disable_retry_limit
    # Use multiple threads for processing.
    num_threads 2
  </match>
Above you can see that I am using logstash_prefix, which is used to generate a dynamic index per namespace in Elasticsearch; you can also use your own pattern. This will help you filter in the settings of the Logtrail plugin.
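To check that the per-namespace indices are actually being created, you can list them via Elasticsearch's cat API (the host below matches the one in the match block above; adjust it to your own Service name):

curl "http://elasticsearch-logging:9200/_cat/indices/namespace-*?v"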
Use any standard way to install Fluent Bit/Fluentd on your Kubernetes cluster, e.g., the Helm charts sketched below.
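For example, a hedged sketch using the upstream Fluent Helm chart repository (the repo URL and chart names below are assumptions about your setup, not taken from this article; the values needed to point the output at your Elasticsearch host depend on the chart version and are omitted):

helm repo add fluent https://fluent.github.io/helm-charts
helm repo update
# Fluent Bit as a DaemonSet in the logging namespace
helm install fluent-bit fluent/fluent-bit --namespace logging --create-namespace
# or, if you prefer Fluentd
helm install fluentd fluent/fluentd --namespace logging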
Elasticsearch
Install Elasticsearch on your own: either use a Helm chart or just use any VMs to install it; you can find many tutorials on how to configure and optimize it. A hedged install sketch follows.
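For example, using the official Elastic Helm charts (the repo URL and chart name are assumptions, not from this article); note that the Service name the chart creates may differ from the elasticsearch-logging host used in the match block above, so adjust the chart values or the Fluentd host accordingly:

helm repo add elastic https://helm.elastic.co
helm repo update
helm install elasticsearch elastic/elasticsearch --namespace logging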
Kibana
You can install the default Kibana either on Kubernetes or on any machine from which it can reach Elasticsearch.
You can use the Logtrail plugin of Kibana for tailing and filtering logs, and use the logtrail.json below.
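A minimal sketch of a logtrail.json, assuming the per-namespace index pattern namespace-* produced by the Fluentd output above; the field names in the mapping (kubernetes.host, kubernetes.pod_name, log) are assumptions about the metadata your collector adds, so map them to whatever your pipeline actually writes:

{
  "version": 2,
  "index_patterns": [
    {
      "es": {
        "default_index": "namespace-*",
        "allow_url_parameter": false
      },
      "tail_interval_in_seconds": 10,
      "es_index_time_offset_in_seconds": 0,
      "display_timezone": "local",
      "display_timestamp_format": "MMM DD HH:mm:ss",
      "max_buckets": 500,
      "default_time_range_in_days": 0,
      "max_hosts": 100,
      "max_events_to_keep_in_viewer": 5000,
      "fields": {
        "mapping": {
          "timestamp": "@timestamp",
          "hostname": "kubernetes.host",
          "program": "kubernetes.pod_name",
          "message": "log"
        },
        "message_format": "{{{log}}}"
      },
      "color_mapping": {}
    }
  ]
}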