Flume log appender

The Flume log appender encapsulates received logs into Flume events and sends these events to external Flume sources via Avro RPC.
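For illustration, delivering an event to a Flume Avro source over Avro RPC looks roughly like the sketch below when done with the stock Flume client SDK. This is a minimal sketch, not Kaa's internal implementation, and the host and port values are placeholders.

// Illustrative only: delivering a single event to a Flume Avro source via Avro RPC
// using the standard Flume client SDK. Host and port are placeholders.
import org.apache.flume.Event;
import org.apache.flume.api.RpcClient;
import org.apache.flume.api.RpcClientFactory;
import org.apache.flume.event.EventBuilder;

import java.nio.charset.StandardCharsets;

public class AvroRpcSendSketch {

    public static void main(String[] args) throws Exception {
        RpcClient client = RpcClientFactory.getDefaultInstance("10.2.3.93", 7070);
        try {
            Event event = EventBuilder.withBody("hello flume".getBytes(StandardCharsets.UTF_8));
            client.append(event); // delivered to the Avro source listening on 10.2.3.93:7070
        } finally {
            client.close();
        }
    }
}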

Creating Flume log appender in Admin UI

The easiest way to create a Flume log appender for your application is by using the Admin UI.

Creating Flume log appender with REST API

It is also possible to create a Flume log appender for your application by using the REST API. The Administration section below illustrates how to provision the Flume log appender via the REST API.

Formats

The Flume log appender can be configured to produce Flume events using either the Records container or the Generic format.

Records container

In the case of the Records container format, log records are serialized according to the RecordData schema as a binary Avro file and stored in the Flume event raw body.

The RecordData schema has the following four fields.

  • recordHeader
  • schemaVersion
  • applicationToken
  • eventRecords

The recordHeader field stores a set of log metadata fields.

The eventRecords field stores an array of raw records. Each element of the array is a log record in the Avro binary format serialized by the log schema.

The schemaVersion and applicationToken fields should be used as parameters of a REST API call to Kaa in order to obtain the Avro schema of the logs in eventRecords and enable parsing of the binary data.
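To make the layout above concrete, the sketch below opens the raw body of a Records container event with the Apache Avro library and prints the fields used to look up the log schema. It is a minimal sketch under the assumptions described above; the class and variable names are illustrative.

// Illustrative sketch: inspecting a Records container Flume event body, which is
// an Avro object container file of RecordData records (field names as described above).
import org.apache.avro.file.DataFileReader;
import org.apache.avro.file.SeekableByteArrayInput;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericRecord;

import java.util.List;

public class RecordDataInspector {

    public static void inspect(byte[] flumeEventBody) throws Exception {
        try (DataFileReader<GenericRecord> reader = new DataFileReader<>(
                new SeekableByteArrayInput(flumeEventBody),
                new GenericDatumReader<GenericRecord>())) {
            for (GenericRecord recordData : reader) {
                // schemaVersion and applicationToken identify the log schema that must be
                // obtained from Kaa via the REST API before eventRecords can be decoded.
                System.out.println("schemaVersion    = " + recordData.get("schemaVersion"));
                System.out.println("applicationToken = " + recordData.get("applicationToken"));
                System.out.println("eventRecords     = "
                        + ((List<?>) recordData.get("eventRecords")).size()
                        + " raw Avro-encoded log record(s)");
            }
        }
    }
}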


Generic

In the case of the Generic format, every log record is represented as a separate Flume event. The Flume event body contains a log record serialized by the log schema in the Avro binary format. The Flume event header contains the log schema definition mapped by the flume.avro.schema.literal key.
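To make this concrete, the sketch below decodes such an event with the Apache Avro and Flume APIs: the log schema is parsed from the flume.avro.schema.literal header, and the event body is decoded as a single Avro-binary log record. The class name is illustrative; the header key and layout follow the description above.

// Illustrative sketch: decoding a Generic-format Flume event. The writer schema
// comes from the flume.avro.schema.literal header; the body is one Avro-binary record.
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.DecoderFactory;
import org.apache.flume.Event;

public class GenericEventDecoder {

    public static GenericRecord decode(Event event) throws Exception {
        Schema logSchema = new Schema.Parser()
                .parse(event.getHeaders().get("flume.avro.schema.literal"));
        GenericDatumReader<GenericRecord> reader = new GenericDatumReader<>(logSchema);
        return reader.read(null, DecoderFactory.get().binaryDecoder(event.getBody(), null));
    }
}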

In addition, Kaa provides the following two extended Flume agents which can be used together with the Flume log appender.

  • Kaa flume source
  • Kaa flume sink

The Kaa flume source is a Flume agent with an extension of the standard Flume NG Avro Sink that includes additional features and performance improvements. The Kaa flume source receives data from the Flume log appender and delivers it to an external Avro Source located in a Hadoop cluster.

The Kaa flume sink is a Flume agent with an extension of the standard Flume NG HDFS Sink that includes additional features. The Kaa flume sink is aware of the log records data schema and stores log data into HDFS as Avro sequence files using a dedicated Avro schema.

${record_data_schema} is a variable in this schema that is substituted at run time by the Kaa HDFS Sink with the Avro schema of the actual logs. This Avro schema is obtained via a REST API call to Kaa.

Configuration

The Flume log appender configuration should match an Avro schema with the following fields.

name                      description
executorThreadPoolSize    Executor thread pool size
callbackThreadPoolSize    Callback thread pool size
clientsThreadPoolSize     Maximum RPC client thread pool size
includeClientProfile      Whether to include client profile data (boolean value)
includeServerProfile      Whether to include server profile data (boolean value)
flumeEventFormat          Flume event format: Records container or Generic
hostsBalancing            Hosts balancing type: Prioritized or Round Robin
flumeNodes                Flume nodes

 

 


The following configuration example matches the previous schema.
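The sketch below assembles such a configuration object in Java with Jackson. The field names follow the table above, while the layout, values (thread pool sizes, event format, balancing type, host, port, priority) and enum literals are illustrative assumptions rather than defaults.

// Illustrative sketch only: field names follow the configuration table above;
// values, layout and enum literals are assumptions defined by the appender's Avro schema.
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ArrayNode;
import com.fasterxml.jackson.databind.node.ObjectNode;

public class FlumeAppenderConfigSketch {

    public static String build() throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        ObjectNode config = mapper.createObjectNode();
        config.put("executorThreadPoolSize", 1);
        config.put("callbackThreadPoolSize", 2);
        config.put("clientsThreadPoolSize", 10);
        config.put("includeClientProfile", false);
        config.put("includeServerProfile", false);
        config.put("flumeEventFormat", "RECORDS_CONTAINER"); // or "GENERIC"
        config.put("hostsBalancing", "PRIORITIZED");         // or "ROUND_ROBIN"

        // One entry per Flume node; priority 1 is the highest for prioritized balancing.
        ArrayNode nodes = config.putArray("flumeNodes");
        ObjectNode node = nodes.addObject();
        node.put("host", "10.2.3.93");
        node.put("port", 7070);
        node.put("priority", 1);

        return mapper.writerWithDefaultPrettyPrinter().writeValueAsString(config);
    }
}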

 

NOTE

Flume log appenders can have either prioritized or round robin host balancing (see the sketch after this list).

  • For the prioritized host balancing, every flume node record should have a host address, port and priority. The highest priority is 1. When choosing a server to which to save logs, an endpoint will send requests to the servers starting from the server with the highest priority.
  • For the round robin host balancing, every flume node record should have a host address and port. When choosing a server to which to save logs, an endpoint will send requests to the servers according to the round robin algorithm.
  • You can include the client/server profile data in the persisted information by selecting the corresponding check boxes.
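For clarity, here is a small Java sketch of the two balancing strategies described above; the class and method names are illustrative and do not mirror Kaa's internal implementation.

// Illustrative sketch of prioritized and round robin node selection;
// not Kaa's actual implementation.
import java.util.Comparator;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class FlumeNodeSelector {

    public record FlumeNode(String host, int port, int priority) {}

    private final AtomicInteger roundRobinIndex = new AtomicInteger();

    // Prioritized: contact nodes starting from priority 1 (the highest).
    public List<FlumeNode> prioritizedOrder(List<FlumeNode> nodes) {
        return nodes.stream()
                .sorted(Comparator.comparingInt(FlumeNode::priority))
                .toList();
    }

    // Round robin: rotate through the nodes for successive requests.
    public FlumeNode nextRoundRobin(List<FlumeNode> nodes) {
        int i = Math.floorMod(roundRobinIndex.getAndIncrement(), nodes.size());
        return nodes.get(i);
    }
}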

Administration

The following REST API call example illustrates how to create a new Flume log appender. 
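Issued from Java, such a call could look like the sketch below. The endpoint path, payload fields, plugin class name, and credentials are assumptions and should be verified against the REST API reference for your Kaa version.

// Hypothetical sketch: creating a Flume log appender through the Admin REST API.
// Endpoint, payload fields and plugin class name are assumptions; verify them
// against the REST API documentation of your Kaa version.
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class CreateFlumeLogAppender {

    public static void main(String[] args) throws Exception {
        String payload = "{"
                + "\"applicationId\":\"<application id>\","
                + "\"name\":\"Flume\","
                + "\"pluginClassName\":\"org.kaaproject.kaa.server.appenders.flume.appender.FlumeLogAppender\","
                + "\"jsonConfiguration\":\"<serialized appender configuration>\""
                + "}";

        HttpURLConnection conn = (HttpURLConnection)
                new URL("http://localhost:8080/kaaAdmin/rest/api/logAppender").openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setRequestProperty("Authorization", "Basic " + Base64.getEncoder()
                .encodeToString("<tenant admin>:<password>".getBytes(StandardCharsets.UTF_8)));
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(payload.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("HTTP " + conn.getResponseCode());
    }
}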


If you want to use Flume agents together with the Flume log appender, create necessary Kaa Flume agents as described in Installing Kaa flume agents.

Setting up Flume log appender

  1. As a tenant admin, go to your application >> Log appenders, then click Add log appender.


  2. In the Add log appender window that opens, fill in the required fields.
    In our example, we use Flume as the Name.
    In the Type drop-down list, select Flume.
    Then, specify the Flume event format (we selected Records container) and the Hosts balancing (we selected Prioritized).
    Finally, specify the cluster parameters: host, port, and priority. In our example, we use localhost:7070, where localhost resolves to 10.2.3.93.





  3. To finish, click Add at the top of the window.
    If the operation is successful, you will see your new log appender in the log appenders list.


 

After that, you can go to the Data collection demos in Sandbox.

Run the demo application from the console.

 

After running the application, you will see its output in the console.

 

 

You can find these logs under the HDFS path that you specified when setting up the Kaa flume sink.
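As a quick check, the sketch below lists that directory with the Hadoop FileSystem API; the HDFS URI and the /flume/logs path are placeholders for the values from your Kaa flume sink setup.

// Illustrative sketch: listing the files written by the Kaa flume sink.
// The HDFS URI and path are placeholders; use the values from your sink setup.
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListSinkOutput {

    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(URI.create("hdfs://10.2.3.93:8020"), new Configuration());
        for (FileStatus status : fs.listStatus(new Path("/flume/logs"))) {
            System.out.println(status.getPath() + "  " + status.getLen() + " bytes");
        }
    }
}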



Copyright © 2014-2015, CyberVision, Inc.
