A More Efficient Log Management System

Ruijie Jin, Zixin Yao

Log files are a primary data source for network observability: they record events such as operations within systems, applications, software, and servers, so it is essential to monitor these files in order to detect and defend against outside attacks. However, computers generate enormous numbers of log files every day, and the main way to keep them under control is log management, the practice of continuously gathering, storing, and analyzing log files from applications. Log management helps identify technical issues when the source of a problem is not obvious, and it strengthens security by detecting unusual activity and notifying users.
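As a concrete illustration of the "analyze and notify" step of log management, the sketch below scans a log file for repeated failed logins and alerts the user. The log format, regular expression, alert threshold, and file path are illustrative assumptions, not part of any specific tool discussed in this paper.

```python
import re
from collections import Counter

# Assumed pattern for failed SSH logins in an auth-style log; real formats vary.
FAILED_LOGIN = re.compile(r"Failed password for (?:invalid user )?(\S+) from (\S+)")
ALERT_THRESHOLD = 5  # assumption: alert after 5 failures from a single address

def scan_auth_log(path):
    """Count failed logins per source address and print alerts for outliers."""
    failures = Counter()
    with open(path, "r", errors="ignore") as log:
        for line in log:
            match = FAILED_LOGIN.search(line)
            if match:
                failures[match.group(2)] += 1
    # Notify the user about addresses with an unusual number of failed logins.
    for address, count in failures.items():
        if count >= ALERT_THRESHOLD:
            print(f"ALERT: {count} failed logins from {address}")

if __name__ == "__main__":
    scan_auth_log("/var/log/auth.log")  # assumed path; varies by system
```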

One of the most widely used log management stacks is the ELK stack. It consists of three open-source projects: Elasticsearch, Logstash, and Kibana [1]. Elasticsearch is a search and analytics engine; Logstash is a server-side data processing pipeline that ingests data from multiple sources simultaneously, transforms it, and then sends it to a "stash" such as Elasticsearch; and Kibana lets users visualize the data stored in Elasticsearch with charts and graphs. However, the ELK stack has several major problems when used for log monitoring. First, Logstash does not scale well, which can cause file collisions and lead to file loss. Second, log entries may also be lost while Logstash is processing them. Finally, unlike some existing log management platforms, the ELK stack is not a real-time system, so it cannot detect potential attacks and notify users as they happen.
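To make the pipeline concrete, the following sketch mimics the role Logstash plays conceptually: it ingests a raw log line, transforms it into a structured document, and sends it to Elasticsearch over its REST API (POST to an index's `_doc` endpoint). The access-log format, the index name `weblogs`, and the local Elasticsearch URL are assumptions made for illustration; this is not the authors' proposed system.

```python
import json
import re
import urllib.request

# Assumed local Elasticsearch instance and an illustrative index name.
ES_URL = "http://localhost:9200/weblogs/_doc"
# Assumed Apache-style access-log line format.
ACCESS_LINE = re.compile(r'(\S+) - - \[(.*?)\] "(\S+) (\S+) \S+" (\d{3}) (\d+)')

def transform(raw_line):
    """Parse one access-log line into a JSON-ready document (the 'transform' step)."""
    m = ACCESS_LINE.match(raw_line)
    if not m:
        return None
    host, timestamp, method, path, status, size = m.groups()
    return {"host": host, "timestamp": timestamp, "method": method,
            "path": path, "status": int(status), "bytes": int(size)}

def ship(doc):
    """Index one document via Elasticsearch's REST API (the 'stash' step)."""
    req = urllib.request.Request(
        ES_URL,
        data=json.dumps(doc).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        print("indexed:", resp.status)

if __name__ == "__main__":
    line = '127.0.0.1 - - [10/Oct/2023:13:55:36 +0000] "GET /index.html HTTP/1.1" 200 2326'
    doc = transform(line)
    if doc is not None:
        ship(doc)
```

Once documents are indexed this way, Kibana can visualize them directly from the Elasticsearch index.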

As mentioned above, even though the ELK stack has these significant problems, it remains one of the most widely used open-source solutions for log management. We therefore propose improvements built on the ELK stack, focusing specifically on the efficiency of log monitoring.

[1] https://www.elastic.co/what-is/elk-stack