You only need to build your own copy of the Kaa IoT platform if you have made customizations. If you are using an unmodified, plain vanilla Kaa release, you can simply use the official installation packages, which are available from the Kaa website. Refer to the Installation guide for further instructions.
For now, deleting applications and tenants is not allowed. However, this option will be added in a future release; see KAA-580.
For now, you cannot delete any schema that affects the SDK (except for client-side and server-side endpoint profile schemas). The reason is that such a schema may already be used in a generated SDK and, therefore, deleting it might affect the behavior of client applications. Although this restriction helps ensure that all generated SDKs remain valid, we understand that it is rather strict, and sometimes inconvenient.
To relax this behavior, we are introducing the concepts of SDK profiles and the Common Type Library (CTL). You will be able to delete, without warnings, data structures that are not included in any SDK profile. You will also be able to delete an SDK profile itself (after making sure that you do not have, or do not care about, affected endpoints that use this profile). Currently, only the client-side and server-side profile schemas support CTL; support for the remaining schema types will be added in future releases.
Kaa doesn't provide an API to fetch data from databases, because data analysis is specific to each use case, so you need to implement your own application for that purpose.
Getting data from the Kaa server is quite easy, and the details depend on the type of Kaa log appender you use. For example, if you are using the MongoDB or Cassandra log appender, you can specify the database name (MongoDB) or keyspace (Cassandra), and once the data is in the database you can query it from your web application using the appropriate database driver.
See the recordings of our Real-time IoT data analytics and visualization with Kaa, Apache Cassandra, and Apache Zeppelin and Time series IoT data ingestion into Cassandra using Kaa IoT webinars for more details, and also check out the “Zeppelin data analytics” demo in the Sandbox.
There are some other options, such as the REST API log appender, which lets Kaa invoke PUT/POST REST API calls against your web application directly; the request body contains the data from the device. However, this is definitely not the best option from a performance point of view.
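For illustration (the payload shape and the port are assumptions here, not the appender's documented contract), a minimal receiving endpoint built on Python's standard library could look like this:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def parse_log_batch(raw_body):
    """Decode a JSON request body into a list of log records."""
    records = json.loads(raw_body)
    # Accept both a single record and a batch of records.
    return records if isinstance(records, list) else [records]

class LogReceiver(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        for record in parse_log_batch(self.rfile.read(length)):
            print("received:", record)  # persist or process the record here
        self.send_response(200)
        self.end_headers()

# To run: HTTPServer(("", 8080), LogReceiver).serve_forever()
```

In a real deployment you would persist the records instead of printing them, and put the receiver behind proper authentication.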
Also, we are exploring options for an auto-generated REST interface for fetching telemetry data as one of the plugins in Kaa 1.0 Banana Beach.
You can simulate your device by simply wrapping the Kaa SDK with a main function.
For example, if you need to upload data from a temperature sensor, just use the Data collection demo with your own log schema (note that the source files for the demos are located in the “source” folder, not in “src”, and the schemas used are in the “resources” folder).
To try it, run the Kaa Sandbox and read the deployment instructions for this demo.
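The simulation loop itself can be sketched as follows. This is a minimal illustration, not Kaa code: `send_log` is a hypothetical stand-in for the SDK's log collection call, and the value range is arbitrary.

```python
import random
import time

def send_log(record):
    # Hypothetical stand-in for the Kaa SDK's log collection call;
    # here it just prints the record.
    print("sending:", record)

def simulate_temperature_sensor(samples, period_s=0.0):
    """Generate fake temperature readings and hand each one to the SDK."""
    records = []
    for _ in range(samples):
        record = {"temperature": round(random.uniform(18.0, 28.0), 1)}
        send_log(record)
        records.append(record)
        time.sleep(period_s)  # pace the readings like a real sensor would
    return records
```

Wrapping such a loop in a main function is all it takes to turn the SDK into a device simulator.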
The minimum hardware requirements for the C SDK are about 10 kB RAM and 40 kB ROM.
The Java SDK requires close to 70 MB per JVM instance.
The minimum requirements for the C++ SDK fall somewhere between those of the C and Java SDKs.
We are in the middle of optimizing our C SDK to make it extremely portable, small, and fast. At some point, we expect to run our applications on low-cost 8/16-bit MCUs, as well as on Arduino.
Currently, the Control service embeds the list of available Bootstrap services into the SDK during SDK generation (using a properties file in the Java implementation, a header file in C++, etc.), and the SDK does not provide an API to override that list, so you cannot change it.
Currently, if you need to change the Bootstrap server host, you need to regenerate the SDK.
For production, we recommend using DNS names that map to the IP addresses of the specific nodes running the Bootstrap services. This allows you to manage the Bootstrap servers' IP addresses without regenerating the SDK.
In future releases, we are planning to implement functionality that will address this issue in a comprehensive way (maybe even in 1.0.0).
You need to create a new Kaa application in the production cluster, import all the schemas that you used on the test server, create a new SDK profile, and generate a new SDK.
The best approach is to keep all schemas next to the source code under a version control system (VCS). For example, the resources folder of one of our sample applications (the Storm analytics demo) contains all the schemas used by the application, so importing them into a different Kaa environment is easy.
We are also working on a concept of endpoint migration across Kaa clusters (from development to production clusters) without re-flashing endpoint credentials or regenerating the SDK; see KAA-124 and KAA-930.
Kaa allows you to aggregate endpoints into endpoint groups based on criteria defined by a profile filter. You can create a filter against client-side or server-side endpoint profile properties, or even the EndpointKeyHash. Note that profile filters use the Spring Expression Language (SpEL) for criteria definitions. For example, to group all endpoints whose client-side endpoint profile field "foo" equals "bar", simply use the expression "#cp.foo == 'bar'" as a filter.
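The filter is evaluated by Kaa on the server. As a rough mental model only (not the actual SpEL engine), the check that the filter "#cp.foo == 'bar'" performs on a client-side profile can be sketched like this:

```python
def profile_value(profile, dotted_path):
    # Walk "a.b.c" through nested dicts, mimicking SpEL property access.
    value = profile
    for part in dotted_path.split("."):
        if not isinstance(value, dict):
            return None
        value = value.get(part)
    return value

def matches(profile, dotted_path, expected):
    # Rough local equivalent of the filter "#cp.<path> == '<expected>'".
    return profile_value(profile, dotted_path) == expected
```

Such a helper is only useful for reasoning about which endpoints a filter will match; the real evaluation supports the full SpEL syntax.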
Another option is to create an endpoint group with a profile filter that matches only a single endpoint. You just have to make sure that no other endpoint is added to the group afterwards, so that it contains the initial endpoint only.
However, in some cases you can still manage individual configurations without creating a gazillion groups. For example, suppose you are managing traffic lights from your cloud, and for simplicity there are just three states: green, yellow, and red. At first glance it may seem that individual configurations are necessary here, but you can also approach the problem differently.
Managing individual configurations is something we are considering for a post-1.0.0 plugin implementation.
Only the C SDK has a platform-dependent part, so first of all you need to port the Kaa SDK to your platform. There is currently no complete guide on porting the Kaa C SDK to a specific platform, but we are working on it. Meanwhile, the general steps are described below.
The Kaa C SDK doesn't require any OS.
To use the Kaa C SDK on a specific platform, you must implement the following routines:
Optional (meaning you can provide an empty implementation):
This page describes the configuration parameters for building the Kaa C SDK on various platforms.
You can also find the source code of demo applications for these platforms there.
Second, if you are using a protocol that is not yet officially supported, you can implement a custom transport.
An alternative option is to integrate the Kaa SDK into a gateway (we call it an "actor gateway"). The actor gateway instantiates an endpoint actor for each physical device. The actor handles communication with the actual device and presents itself to the cloud as a virtual representation of that device (a sensor, etc.).
The actor gateway is particularly useful when you cannot install the Kaa SDK on your device or need to use a sophisticated protocol for device-server connectivity.
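As a minimal sketch of this pattern (the class and method names are our own for illustration, not a Kaa API), the gateway keeps one actor per physical device and translates each device's native messages into records destined for the server:

```python
class EndpointActor:
    """Virtual representation of one physical device inside the gateway."""

    def __init__(self, device_id):
        self.device_id = device_id
        self.outbox = []  # records queued for delivery to the server

    def on_device_reading(self, reading):
        # Translate the device's native message into a server-bound record.
        self.outbox.append({"device": self.device_id, **reading})


class ActorGateway:
    """Instantiates one endpoint actor per physical device."""

    def __init__(self):
        self.actors = {}

    def actor_for(self, device_id):
        # Lazily create an actor the first time a device is seen.
        if device_id not in self.actors:
            self.actors[device_id] = EndpointActor(device_id)
        return self.actors[device_id]
```

In a real gateway, each actor would also carry the per-endpoint SDK state and flush its outbox to the Kaa server over the SDK's transport.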
Communication security in Kaa is provided at the transport level by a hybrid scheme of RSA-2048 and AES-128/256.
Currently, Kaa supports only one log schema per SDK profile. This functionality will be improved in future releases (see KAA-624).
For now, the best way to emulate this feature is to merge all of your schemas into a single one. For example, suppose you have a device that sends log records containing measurement data from two sensors, temperature and humidity, and you want to collect this data in different tables or even different databases. Consider the following schemas:
The temperature sensor log schema:
The humidity sensor log schema:
The resulting log schema will consist of a single union field (Measurements) of the TemperatureData and HumidityData schemas and will look like this:
Then, when your endpoint needs to generate a new log record, you will be able to choose one of the schemas: TemperatureData for temperature measurements and HumidityData for humidity.
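Sketched as an Avro schema (the namespace and field names below are illustrative assumptions, not the demo's actual definitions), such a merged schema might look like:

```json
{
  "type": "record",
  "name": "LogRecord",
  "namespace": "org.example.kaa",
  "fields": [
    {
      "name": "Measurements",
      "type": [
        {
          "type": "record",
          "name": "TemperatureData",
          "fields": [
            {"name": "temperature", "type": "double"}
          ]
        },
        {
          "type": "record",
          "name": "HumidityData",
          "fields": [
            {"name": "humidity", "type": "double"}
          ]
        }
      ]
    }
  ]
}
```

A log appender can then inspect which union branch each record carries and route it to the appropriate table or database.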
You can even implement your own custom log appender if your use case requires some specific log processing logic.