If you have been reading along, this is the final part of my short series on cleaning up messy logs. I might write one on Kibana itself some day, as it is a difficult tool for beginners. Previous parts of this series can be found here: part 1 and part 2.
Over the last couple of days at work I could spend more time on fixing logs, as the new project estimates were finished. The last thing I sorted out was syslog parsing; with that done, I could move on to figuring out how to log tracebacks raised in our services.
Tracebacks are spread over many lines, so when they arrive at logstash they are parsed as separate entries and sent to elasticsearch that way. Kibana then displays them as separate entries as well, which makes them really difficult to read. On top of being split apart, they appear in reverse order.
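To see why this happens, here is a quick stdlib-only sketch (the logger name is mine, not from our services) showing that a single `logger.exception()` call emits a multi-line record; a line-oriented shipper then treats each of those lines as its own event:

```python
import io
import logging

def multiline_traceback_demo() -> str:
    """Log an exception and return what the handler wrote.

    Illustrates that one logging call with a traceback
    produces many output lines.
    """
    stream = io.StringIO()
    handler = logging.StreamHandler(stream)
    logger = logging.getLogger("traceback-demo")
    logger.addHandler(handler)
    logger.propagate = False
    try:
        1 / 0
    except ZeroDivisionError:
        # logger.exception appends the formatted traceback to the
        # message: one output line per frame of the stack
        logger.exception("request failed")
    logger.removeHandler(handler)
    return stream.getvalue()
```

The returned text contains the message line, the `Traceback (most recent call last):` header, and one or more frame lines, each terminated by a newline.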
My idea is to serialize each log message into JSON and then parse it in logstash. As I have probably mentioned, I am working on an app that follows the microservices pattern, and there are a few services to cover. The first candidate is the gateway service, as it is the easiest to test and relatively simple, with no additional components besides the service itself. It is a flask app running on gunicorn, so configuring logging requires no code changes; you only adjust your entrypoint:
```bash
#!/bin/bash
exec gunicorn -b 0.0.0.0:80 \
    --log-config /code/log-config.ini \
    --access-logfile - \
    --reload \
    "gate.app:create_app()"
```
The config file should be written in ini style:
```ini
[loggers]
keys=root

[handlers]
keys=console

[formatters]
keys=default

[logger_root]
level=DEBUG
qualname=root
handlers=console

[handler_console]
class=StreamHandler
formatter=default
args=(sys.stdout, )

[formatter_default]
class=logstash_formatter.LogstashFormatter
```
This alone handles almost the whole application's logging configuration: logs are nicely serialized into JSON, with the whole traceback kept together.
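On the logstash side, each JSON line still needs to be decoded into fields. A minimal sketch using logstash's stock `json` filter (the source field name depends on your input configuration; `message` is the common default):

```
filter {
  json {
    source => "message"
  }
}
```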