Log appenders

A log appender is a service utility that resides on the Operations server. It is responsible for writing logs, received by the Operations server from endpoints, to a single specific storage defined by the log appender's type. Each Kaa application may use only one log appender at a time. A Kaa developer can add, update, and delete log appenders using the Admin UI or REST API. Kaa provides several default implementations of log appenders. It is also possible to create custom log appenders.

Default log appenders

There are several default log appender implementations that are available out of the box for each Kaa installation. This page contains general information about architecture, configuration and administration of the default log appenders.

File system log appender

The file system log appender stores received logs in the local file system of the Operations server. This log appender may be used for test purposes or together with tools such as Flume. Logs are stored in files under the /$logsRootPath/tenant_$tenantId/application_$applicationId folder, where logsRootPath is a configuration parameter, and tenantId and applicationId are the IDs of the current tenant and application, respectively. Access to the logs is controlled via Linux file system permissions.
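For example, assuming logsRootPath is set to /var/log/kaa (an illustrative value), logs received for tenant 1 and application 5 would be written under /var/log/kaa/tenant_1/application_5.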

You can log in to the Operations server host and browse the logs using the kaa_log_user_$applicationToken user name and the public key which is created as a part of the configuration.

Configuration

The file system log appender configuration should match the following Avro schema.

The following configuration example matches the previous schema.

 

Administration

To create the file system log appender, use either Admin UI or REST API. The following REST API call example illustrates how to create a new file system log appender.

 Example result

Flume log appender

The Flume log appender encapsulates received logs into Flume events and sends these events to external Flume sources via Avro RPC.

The Flume log appender can be configured to produce Flume events using either the Records container or the Generic format.

  • In case of the Records container format, log records are serialized into a binary Avro file according to the following RecordData schema and stored in the raw body of a Flume event.

The RecordData schema has the following four fields.

    • recordHeader
    • schemaVersion
    • applicationToken
    • eventRecords

The recordHeader field stores a set of log metadata fields.

The eventRecords field stores an array of raw records. Each element of the array is a log record in the Avro binary format serialized by the log schema.

The schemaVersion and applicationToken fields should be used as parameters of a REST API call to Kaa in order to obtain the Avro log schema for eventRecords and enable parsing of the binary data.

 

 

  • In case of the Generic format, every log record is represented as a separate Flume event. The Flume event body contains the log record serialized in the Avro binary format according to the log schema. The Flume event header contains the log schema definition under the flume.avro.schema.literal key.
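As a rough illustration of the Generic format, the following Java sketch serializes a single log record with its Avro log schema and wraps it into a Flume event carrying the flume.avro.schema.literal header. The LogData schema and field values are assumptions made up for the example; this is not the actual appender implementation.

    import org.apache.avro.Schema;
    import org.apache.avro.generic.GenericData;
    import org.apache.avro.generic.GenericDatumWriter;
    import org.apache.avro.generic.GenericRecord;
    import org.apache.avro.io.BinaryEncoder;
    import org.apache.avro.io.EncoderFactory;
    import org.apache.flume.Event;
    import org.apache.flume.event.EventBuilder;

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.util.Collections;

    public class GenericFormatExample {

        // Serialize a log record to Avro binary and wrap it into a Flume event.
        public static Event toFlumeEvent(Schema logSchema, GenericRecord logRecord) throws IOException {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            GenericDatumWriter<GenericRecord> writer = new GenericDatumWriter<GenericRecord>(logSchema);
            BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
            writer.write(logRecord, encoder);
            encoder.flush();

            // The event body is the binary record; the header carries the log schema definition.
            return EventBuilder.withBody(out.toByteArray(),
                    Collections.singletonMap("flume.avro.schema.literal", logSchema.toString()));
        }

        public static void main(String[] args) throws IOException {
            // Illustrative log schema; a real application uses the log schema registered in Kaa.
            Schema logSchema = new Schema.Parser().parse(
                    "{\"type\":\"record\",\"name\":\"LogData\",\"fields\":["
                  + "{\"name\":\"level\",\"type\":\"string\"},"
                  + "{\"name\":\"message\",\"type\":\"string\"}]}");

            GenericRecord record = new GenericData.Record(logSchema);
            record.put("level", "INFO");
            record.put("message", "Hello from an endpoint");

            Event event = toFlumeEvent(logSchema, record);
            System.out.println("Flume event body size: " + event.getBody().length + " bytes");
        }
    }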

In addition, Kaa provides the following two extended Flume agents which can be used together with the Flume log appender.

  • Kaa flume source
  • Kaa flume sink

The Kaa flume source is a Flume agent that extends the standard Flume NG Avro Sink with additional features and performance improvements. The Kaa flume source receives data from the Flume log appender and delivers it to an external Avro Source located in a Hadoop cluster.

The Kaa flume sink is a Flume agent that extends the standard Flume NG HDFS Sink with additional features. The Kaa flume sink is aware of the log records' data schema and stores log data into HDFS as Avro sequence files using the following Avro schema.


${record_data_schema} is a variable that is substituted at run time by the Kaa HDFS Sink with the Avro schema of the actual logs. This Avro schema is obtained via a REST API call to Kaa.

Configuration

The Flume log appender configuration should match the following Avro schema.

The following configuration example matches the previous schema.

NOTE

Flume log appenders can use either prioritized or round-robin host balancing.

  • For prioritized host balancing, every Flume node record should have a host address, port, and priority, with 1 being the highest priority. When choosing a server to send logs to, an endpoint sends requests to the servers starting from the one with the highest priority.
  • For round-robin host balancing, every Flume node record should have a host address and port. When choosing a server to send logs to, an endpoint sends requests to the servers according to the round-robin algorithm.
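The following simplified Java sketch models the two balancing strategies described above; the FlumeNode class and the selection logic are illustrative assumptions, not the actual appender code.

    import java.util.List;
    import java.util.concurrent.atomic.AtomicInteger;

    // Illustrative model of a configured Flume node (host, port, optional priority).
    class FlumeNode {
        final String host;
        final int port;
        final int priority; // 1 is the highest priority (used by prioritized balancing only)

        FlumeNode(String host, int port, int priority) {
            this.host = host;
            this.port = port;
            this.priority = priority;
        }
    }

    class HostBalancingSketch {
        private final AtomicInteger counter = new AtomicInteger();

        // Prioritized balancing: pick the node with the highest priority (lowest number) first.
        FlumeNode pickPrioritized(List<FlumeNode> nodes) {
            FlumeNode best = nodes.get(0);
            for (FlumeNode node : nodes) {
                if (node.priority < best.priority) {
                    best = node;
                }
            }
            return best;
        }

        // Round-robin balancing: cycle through the nodes in configuration order.
        FlumeNode pickRoundRobin(List<FlumeNode> nodes) {
            int index = (counter.getAndIncrement() & Integer.MAX_VALUE) % nodes.size();
            return nodes.get(index);
        }
    }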

Administration

To create a Flume log appender, use either Admin UI or REST API. The following REST API call example illustrates how to create a new Flume log appender. 

 Example result

If you want to use Flume agents together with the Flume log appender, create necessary Kaa Flume agents as described in Installing Kaa flume agents.

MongoDB log appender

The MongoDB log appender is responsible for transferring logs from the Operations server to the MongoDB database. The logs are stored in the collection named logs_$applicationToken, where $applicationToken matches the token of the current application.
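As an illustration, the stored documents can be inspected with the MongoDB Java driver; the host, database name, and application token below are assumptions made up for the example.

    import com.mongodb.MongoClient;
    import com.mongodb.client.MongoCollection;
    import org.bson.Document;

    public class MongoLogsExample {
        public static void main(String[] args) {
            // Illustrative connection settings; use the host and database configured for the appender.
            MongoClient client = new MongoClient("localhost", 27017);
            try {
                MongoCollection<Document> logs = client.getDatabase("kaa")
                        .getCollection("logs_82635305199158071549"); // logs_$applicationToken (hypothetical token)

                // Print the first few stored log documents.
                for (Document doc : logs.find().limit(5)) {
                    System.out.println(doc.toJson());
                }
            } finally {
                client.close();
            }
        }
    }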

Configuration

The MongoDB log appender configuration should match the following Avro schema.

The following configuration example matches the previous schema.

Administration

To create the MongoDB log appender, use Admin UI or REST API. The following REST API call example illustrates how to create a new MongoDB log appender.

 Example result

Cassandra log appender

The Cassandra log appender is responsible for transferring logs from the Operations server to the Cassandra database. The logs are stored in the table named logs_$applicationToken, where $applicationToken matches the token of the current application.

Configuration

The Cassandra log appender configuration should match the following Avro schema.

The following configuration example matches the previous schema.

NOTE
Before using the Cassandra log appender, you need to create a Cassandra keyspace with the appropriate replication factor and strategy class.
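For example, a keyspace could be created with the DataStax Java driver as sketched below; the contact point, keyspace name, and replication settings are assumptions and should be adjusted to your Cassandra cluster and appender configuration.

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Session;

    public class CreateKeyspaceExample {
        public static void main(String[] args) {
            // Illustrative contact point; use the address of one of your Cassandra nodes.
            Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
            try {
                Session session = cluster.connect();
                // Hypothetical keyspace name with a simple strategy suitable for a single-node cluster.
                session.execute("CREATE KEYSPACE IF NOT EXISTS kaa "
                        + "WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};");
            } finally {
                cluster.close();
            }
        }
    }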

Administration

To create the Cassandra log appender, use Admin UI or REST API. The following REST API call example illustrates how to create a new Cassandra log appender.

 Example result

 

CDAP log appender

The CDAP log appender is responsible for transferring logs to the CDAP platform. Logs are stored in a stream that is specified by the stream configuration parameter.

Configuration

The CDAP log appender configuration should match the following Avro schema.

The following configuration example matches the previous schema. 

Administration

To create a CDAP log appender, use either Admin UI or REST API. The following REST API call example illustrates how to create a new CDAP log appender. 

 Example result

Oracle NoSQL log appender

The Oracle NoSQL log appender is responsible for transferring logs from the Operations server to the Oracle NoSQL key/value storage. Logs are stored in the key/value storage using the following key path:
${applicationToken}/${logSchemaVersion}/${endpointKeyHash}/${uploadTimestamp}/${counter}

where:

  • applicationToken - the token of the current application
  • logSchemaVersion - the version of the Avro log schema used to serialize the log records
  • endpointKeyHash - a key hash identifying the endpoint that produced the log record
  • uploadTimestamp - the timestamp in milliseconds at which the logs were uploaded to the key/value storage
  • counter - the serial number of the record

Values are stored as serialized Generic Records using the record wrapper Avro schema.
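For example, with an illustrative application token of 82635305199158071549, log schema version 2, endpoint key hash EKH123, and upload timestamp 1432558942916, the first record of an upload would be stored under a key path similar to 82635305199158071549/2/EKH123/1432558942916/0.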

Configuration

The Oracle NoSQL log appender configuration should match the following Avro schema.

The following configuration example matches the previous schema.

Administration

To create an Oracle NoSQL log appender, use either Admin UI or REST API. The following REST API call example illustrates how to create a new Oracle NoSQL log appender.

 Example result

Custom log appenders

Refer to the Creating custom log appender page to learn how to implement custom log appenders.


Copyright © 2014-2015, CyberVision, Inc.
