What is Filebeat and why is it important?
Beats are open-source, lightweight data shippers. You install them as agents on your servers to ship operational data to Elasticsearch. You can send data from Beats directly to Elasticsearch or via Logstash, where you can further process and enrich the data.
Filebeat has a low memory footprint for forwarding and centralizing logs and files, and it removes the need to SSH into each machine, which matters when you have numerous servers, virtual machines, and containers generating logs.
Filebeat is a logging agent. You can install it on the machines that create the log files. Filebeat forwards the data to Logstash or directly into Elasticsearch for indexing.
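For orientation, here is a minimal filebeat.yml sketch; the log path and the Logstash host are placeholders rather than values from a real setup:

filebeat.inputs:
  - type: filestream                 # reads log files line by line
    id: app-logs                     # hypothetical input id
    paths:
      - /var/log/myapp/*.log         # hypothetical log path

output.logstash:                     # or output.elasticsearch to skip Logstash
  hosts: ["logstash:5044"]           # hypothetical Logstash host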
Filebeat Processors
If you are not using Logstash but still want to process or customize the logs before sending them to Elasticsearch, you can use Filebeat processors. You can decode JSON strings, add various metadata (e.g. Docker, Kubernetes), drop specific fields, and more.
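As a rough sketch, with the dropped field chosen purely for illustration, such processors can be declared in filebeat.yml like this:

processors:
  - decode_json_fields:              # parse the JSON string stored in the message field
      fields: ["message"]
      target: ""                     # merge the decoded keys into the top level of the event
  - add_docker_metadata: ~           # enrich events with Docker container metadata
  - drop_fields:
      fields: ["host.mac"]           # drop a field we do not need (illustrative)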
Talk with our experts: https://aaic.cc/g0ip
Sending logs from Filebeat directly to Elasticsearch provides benefits such as:
1. We were using Logstash to parse the logs into Elasticsearch, but due to heavy traffic, logs were getting lost and delayed in Elasticsearch. We modified the workflow and now parse the logs from Filebeat directly to Elasticsearch. We get the logs in real-time and there is no delay in the logs reaching Elasticsearch.
The image below shows the heavy traffic (6 million hits in 15 minutes) coming from the pods, with Filebeat shipping the data in minimal time.
So, Filebeat can handle heavy traffic.
2. It can support encryption.
3. It decreases the latency introduced by processing logs through a middle component like Logstash.
Logstash performance depends on a few factors, e.g.:
• the number and complexity of the filters you use
• the number of filter and output workers, and the available system resources
Internal pipeline changes across recent versions also affect how you tune it, so knowing your configuration and your Logstash version helps you tune it in the best possible way. Besides that, Logstash processing will be limited by the throughput of the slowest output.
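For reference, the main knobs behind those factors live in logstash.yml; the values below are placeholders, not recommendations:

pipeline.workers: 4                  # number of filter/output workers (defaults to the CPU core count)
pipeline.batch.size: 250             # events per worker batch; larger batches trade latency for throughput
pipeline.batch.delay: 50             # milliseconds to wait before flushing an undersized batch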
Connect with us: https://aaic.cc/03fq
4. Cost Reduction
If you ship the logs from Filebeat to Elasticsearch, you save the cost of the resources consumed by the Logstash pods.
5. Logs get reflected in Elasticsearch in real-time.
The image below shows the timestamp of a pod log.
As shown in the next screenshot, the same log appears in Kibana at the same time.
6. There is no loss of logs.
When we were passing the logs through Logstash to Elasticsearch, the log count on the microservice pods and the logs in Kibana did not match; only about 20% of the microservice pod logs were present in Kibana. After modifying the workflow, the log counts matched.
Processors
You can define processors in the Filebeat configuration file, per input. To define a processor, you specify the following (see the snippet below):
• the processor name
• an optional condition
• a set of parameters
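In the configuration file, a processor definition takes roughly the following shape; the angle brackets are placeholders that are explained below:

processors:
  - <processor_name>:
      when:
        <condition>
      <parameters>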
Where:
• <processor_name> specifies a processor that performs some kind of action, for example selecting the fields that are exported or adding metadata to the event.
• <condition> specifies an optional condition. If the condition is present, the action is executed only when the condition is met. If no condition is set, the action is always executed.
• <parameters> is the list of parameters to pass to the processor.
Let’s take an example where we have two APIs (a vehicle API and a furniture API), and each API has multiple microservices.
These microservices are deployed in the Kubernetes cluster as Deployments.
We use Filebeat Autodiscover to fetch the logs of the pods.
Logs that are structured as JSON messages are decoded using the JSON options.
Kubernetes metadata is added to the log so that we can add fields based on the pod labels.
Pod labels are available under the kubernetes.labels field, e.g. the app label is present under kubernetes.labels.app.
• Use the add_fields processor to add fields such as api_name and microservice_name.
• Use a condition to specify when to add these fields.
• To add the fields at the top level, set the target to an empty string.
• Use the drop_fields processor to remove unwanted fields (see the configuration sketch below).
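Putting these steps together, a configuration sketch could look like the following; the label values (vehicle-service, furniture-service) and the dropped fields are assumptions made for illustration, not values from a real cluster:

filebeat.autodiscover:
  providers:
    - type: kubernetes                              # discovers pods and adds kubernetes.* metadata
      templates:
        - config:
            - type: container
              paths:
                - /var/log/containers/*-${data.kubernetes.container.id}.log
              json.keys_under_root: true            # decode JSON-structured log lines
              json.add_error_key: true

processors:
  - add_fields:
      when:
        equals:
          kubernetes.labels.app: "vehicle-service"  # hypothetical value of the app label
      target: ""                                    # empty target adds the fields at the top level
      fields:
        api_name: "vehicle"
        microservice_name: "vehicle-service"
  - add_fields:
      when:
        equals:
          kubernetes.labels.app: "furniture-service" # hypothetical value of the app label
      target: ""
      fields:
        api_name: "furniture"
        microservice_name: "furniture-service"
  - drop_fields:
      fields: ["agent.ephemeral_id", "host.mac"]    # unwanted fields (illustrative)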
Filebeat Output
Finally, send the logs to Elasticsearch using the Elasticsearch output.
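A sketch of that output block, with a placeholder host, credentials, and certificate path; the TLS settings correspond to the encryption benefit mentioned above:

output.elasticsearch:
  hosts: ["https://elasticsearch:9200"]                  # hypothetical Elasticsearch endpoint
  username: "elastic"                                    # hypothetical credentials
  password: "${ES_PASSWORD}"                             # e.g. injected via the keystore or an environment variable
  ssl.certificate_authorities: ["/etc/filebeat/ca.crt"]  # CA used to verify the encrypted connection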
Read the blog in detail: https://aaic.cc/78gc