The Kaa logging subsystem is designed to collect records (logs) of a pre-configured structure, periodically deliver them from endpoints to Operations servers, and either persist them on the server for further processing or submit them to immediate stream analytics. The structure of Kaa logs is determined by a configurable schema. The framework automatically does the following:
- Generates the logging model and related API calls in the endpoint SDK;
- Enforces data integrity and validity;
- Efficiently delivers the logs to Operations servers;
- Saves the log contents into the storage implemented by the log appender(s) configured for the application.
The application developer is responsible for designing the log data schema and invoking the endpoint logging API from the client application.
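To make the client-side responsibility concrete, the sketch below models how an application might hand a record to the endpoint logging API. All names here (`KaaClient`, `LogData`, `add_log_record`) are hypothetical stand-ins for the classes and calls generated by the endpoint SDK, not the SDK's actual API.

```python
from dataclasses import dataclass

@dataclass
class LogData:
    """Stand-in for the record class generated from the log schema."""
    level: str
    tag: str
    message: str

class KaaClient:
    """Minimal stand-in for the endpoint SDK client: it collects records
    that would later be delivered to an Operations server in batches."""
    def __init__(self):
        self._pending = []

    def add_log_record(self, record: LogData) -> None:
        # The real SDK buffers records and ships them according to its
        # configured upload strategy; this sketch only queues them.
        self._pending.append(record)

client = KaaClient()
client.add_log_record(LogData("INFO", "sensor", "temperature read OK"))
```

The point of the sketch is the division of labor: the developer designs the schema and calls the logging API; buffering, delivery, and persistence are handled by the framework.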
Log records schema
The log records schema is fully compatible with the Apache Avro schema. A single log records schema is defined per Kaa application, and it supports versioning: whenever a new log records schema is configured on the Kaa server, it is assigned the next sequence version. The Kaa server maintains backward compatibility with older schema versions to support clients that have not yet been upgraded at any given moment.
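The versioning behavior described above can be sketched as a simple registry that assigns sequence numbers and keeps old versions resolvable. This is an illustration of the concept only, not Kaa's actual implementation:

```python
class LogSchemaRegistry:
    """Illustrative sketch: each newly configured schema receives the
    next sequence version, and older versions remain retrievable so
    that not-yet-upgraded clients can keep logging against them."""
    def __init__(self):
        self._versions = {}   # version number -> schema text
        self._latest = 0

    def add_schema(self, schema_text: str) -> int:
        # Assign the next sequence version to the new schema.
        self._latest += 1
        self._versions[self._latest] = schema_text
        return self._latest

    def get_schema(self, version: int) -> str:
        # Old versions stay available for backward compatibility.
        return self._versions[version]

registry = LogSchemaRegistry()
v1 = registry.add_schema("schema-v1-text")
v2 = registry.add_schema("schema-v2-text")
# v1 == 1, v2 == 2; both versions remain retrievable.
```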
See examples below for illustrations of basic log schemas.
The simplest definition of a log with no data fields (mostly useless):
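Such a schema is an Avro record with an empty field list; the name and namespace below are illustrative:

```json
{
    "name": "EmptyLog",
    "namespace": "org.kaaproject.example",
    "type": "record",
    "fields": []
}
```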
Simple schema with a log level, tag, and message:
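One way to express such a schema in Avro is shown below; the field types and names are an assumption for illustration, with the log level modeled as an enum:

```json
{
    "name": "LogData",
    "namespace": "org.kaaproject.example",
    "type": "record",
    "fields": [
        {
            "name": "level",
            "type": {
                "type": "enum",
                "name": "Level",
                "symbols": ["DEBUG", "INFO", "WARN", "ERROR", "FATAL"]
            }
        },
        { "name": "tag", "type": "string" },
        { "name": "message", "type": "string" }
    ]
}
```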
Log appenders are responsible for writing the logs received by the Operations server into specific storage systems. For more information, see Log appenders.
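Conceptually, an appender is a pluggable sink behind the Operations server. The sketch below illustrates that idea with an invented interface; it is not Kaa's actual appender API:

```python
from abc import ABC, abstractmethod

class LogAppender(ABC):
    """Illustrative appender contract: write records delivered to the
    Operations server into some backing storage."""
    @abstractmethod
    def append(self, records: list) -> None: ...

class InMemoryAppender(LogAppender):
    """Toy appender that keeps records in memory, standing in for a
    file, database, or stream sink."""
    def __init__(self):
        self.storage = []

    def append(self, records):
        self.storage.extend(records)

appender = InMemoryAppender()
appender.append([{"level": "INFO", "tag": "demo", "message": "hello"}])
```

Because the storage logic lives entirely in the appender, an application can switch destinations (file, database, analytics stream) without changing the endpoints or the schema.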