Logging

DC/OS cluster nodes generate logs that contain diagnostic and status information for DC/OS core components and DC/OS services.

Service and Task Logs

If you’re running a service or task on top of DC/OS, you can start tailing its logs right away with this DC/OS CLI command:

dcos task log --follow my-service-name
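
By default this streams new lines as they arrive. If you just need a recent snapshot instead, the same command can print a fixed number of lines; a minimal sketch, with my-service-name again standing in for your own task name:

dcos task log --lines=20 my-service-name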

For more information about accessing your logs, see the service and task logs documentation.

System Logs

You can find which components are unhealthy in the DC/OS UI on the System tab.

(Screenshot: system health view in the DC/OS UI)

You can also aggregate your system logs by using ELK or Splunk. See our ELK and Splunk tutorials to get started.

All of the DC/OS components use systemd-journald to store their logs. To access the DC/OS core component logs, SSH into a node and run this command to see all of them:

journalctl -u "dcos-*" -b

You can also view the logs for a specific component by passing its systemd unit name:

Admin Router

journalctl -u dcos-nginx -b

DC/OS Marathon

journalctl -u dcos-marathon -b

gen-resolvconf

journalctl -u dcos-gen-resolvconf -b

Mesos master node

journalctl -u dcos-mesos-master -b

Mesos agent node

journalctl -u dcos-mesos-slave -b

Mesos DNS

journalctl -u dcos-mesos-dns -b

ZooKeeper (managed by Exhibitor)

journalctl -u dcos-exhibitor -b
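
To capture a component’s log to a file, for example to attach to a support request, you can disable the pager and redirect the output. A minimal sketch using the Mesos master log:

journalctl -u dcos-mesos-master -b --no-pager > dcos-mesos-master.log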

Next Steps


Service and Task Logging

As soon as you move from one machine to many, accessing and aggregating logs becomes difficult. Once you hit a certain scale, keeping these logs and making them available to others can add massive overhead to your cluster. After watching how users interact with their logs, we’ve scoped the problem to two primary use cases. This allows you to pick the solution with the lowest overhead that solves your specific problem.

Log Management in DC/OS with ELK

You can pipe system and application logs from the nodes in a DC/OS cluster to your existing Elasticsearch, Logstash, and Kibana (ELK) server. This document describes how to store all unfiltered logs in Elasticsearch and then perform filtering and specialized querying there. The Filebeat output from each node is sent directly to a centralized Elasticsearch instance, without using Logstash. If you’re interested in using Logstash for log processing or parsing, consult the Filebeat and Logstash documentation.
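
To confirm that Filebeat output is reaching your cluster, you can list the Filebeat indices with Elasticsearch’s cat API. This is only a sketch; elasticsearch.example.com:9200 stands in for your own Elasticsearch endpoint.

curl -s 'http://elasticsearch.example.com:9200/_cat/indices/filebeat-*?v'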

Filtering ELK

The file system paths of DC/OS task logs contain information such as the agent ID, framework ID, and executor ID. You can use this information to filter the log output for specific tasks, applications, or agents.
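
As an illustration, a query-string search against the Filebeat indices can match an ID embedded in the log file path. This sketch assumes Filebeat’s default source field holds the file path; elasticsearch.example.com and my-framework-id are placeholders:

curl -s 'http://elasticsearch.example.com:9200/filebeat-*/_search?q=source:my-framework-id&pretty'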

Splunk

You can pipe system and application logs from a DC/OS cluster to your existing Splunk server.

Filtering Splunk

The file system paths of DC/OS task logs contain information such as the agent ID, framework ID, and executor ID. You can use this information to filter the log output for specific tasks, applications, or agents.
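
As a comparable sketch in Splunk, a search can filter on the source path of the forwarded log file. The path layout and my-framework-id here are assumptions to adapt to your own agents’ work directories:

source="*/frameworks/my-framework-id/*"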