RM2 Platform

The platform lets programmers in the robotics space easily integrate sensory data for learning. The outputs of that learning can be fed back to motor components to enable seamless automation. The platform ships with a self-learning engine built on a self-organizing map and requires no human intervention.


Because the self-learning engine learns directly from sensor input, robotics engineers simply plug their sensory data into the platform and the machine is ready to learn in its external environment. The accompanying infographic illustrates how the platform acts as the neural processing unit for machines and shows the various sensors that can be integrated into the platform architecture.
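The platform's programming interface is not published here, so the following is only a minimal sketch of what plugging a sensor stream into the engine might look like; the SensorStream and Platform names, and the attach_sensor and step methods, are hypothetical illustrations, not a real API.

```python
# Minimal sketch of plugging sensor streams into the platform.
# All classes and methods here are hypothetical, not a published API.

class SensorStream:
    """Wraps a raw sensor feed as a named source of readings."""
    def __init__(self, name, read_fn):
        self.name = name
        self.read_fn = read_fn  # callable returning the latest reading

    def read(self):
        return self.read_fn()

class Platform:
    """Stand-in for the RM2 self-learning engine."""
    def __init__(self):
        self.sensors = {}

    def attach_sensor(self, stream):
        # Each attached stream becomes an input the engine learns from.
        self.sensors[stream.name] = stream

    def step(self):
        # One learning step: pull the latest reading from every sensor.
        return {name: s.read() for name, s in self.sensors.items()}

platform = Platform()
platform.attach_sensor(SensorStream("lidar", lambda: [0.42, 0.38, 0.51]))
platform.attach_sensor(SensorStream("imu", lambda: {"ax": 0.0, "ay": 0.1}))
print(platform.step())
```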


The two main aspects of the platform are its architecture and its model for processing information. Together, these define the capabilities of the self-learning feature.

Architecture:
The hierarchical semantic network architecture is designed on the basis of the neural signaling hierarchy and plays a significant role in allowing the learning model to reach outputs in a minimum number of steps.
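The document does not detail the underlying data structure, but a hierarchical semantic network is naturally modeled as labeled nodes with weighted parent-child links, where following the heaviest link at each level yields an output in few steps. The sketch below assumes that representation; every name in it is illustrative.

```python
# Illustrative hierarchical semantic network: nodes arranged in levels
# with weighted links, so queries resolve in a minimum number of hops.

class Node:
    def __init__(self, label, level):
        self.label = label
        self.level = level    # position in the signaling hierarchy
        self.children = {}    # child label -> (Node, weight)

    def add_child(self, child, weight):
        self.children[child.label] = (child, weight)

    def strongest_path(self):
        # Follow the heaviest edge at each level: an output in few steps.
        path = [self.label]
        node = self
        while node.children:
            node, _ = max(node.children.values(), key=lambda cw: cw[1])
            path.append(node.label)
        return path

root = Node("obstacle", level=0)
turn = Node("turn_left", level=1)
stop = Node("stop", level=1)
root.add_child(turn, weight=0.3)
root.add_child(stop, weight=0.8)
print(root.strongest_path())  # ['obstacle', 'stop']
```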

Information Processing:
The processing routine drives its learning by extracting differences and similarities from patterns and ranking the unit strings within each pattern. The patterns themselves are formed based on the hierarchy of the architecture.
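One plausible reading of "extracting differences and similarities and ranking unit strings" is to treat each pattern as a sequence of unit strings, take intersections and differences between patterns, and rank units by how often they recur. The sketch below implements that reading, not the platform's actual routine.

```python
# Sketch: compare two patterns (sequences of unit strings), extract
# shared and differing units, and rank units by recurrence.
from collections import Counter

def compare_patterns(a, b):
    shared = set(a) & set(b)       # similarities between the patterns
    differing = set(a) ^ set(b)    # differences between the patterns
    return shared, differing

def rank_units(patterns):
    # Units that recur across more patterns rank higher.
    counts = Counter(unit for pattern in patterns for unit in set(pattern))
    return counts.most_common()

p1 = ["edge", "red", "round"]
p2 = ["edge", "blue", "round"]
shared, differing = compare_patterns(p1, p2)
print(shared)                 # {'edge', 'round'}
print(differing)              # {'red', 'blue'}
print(rank_units([p1, p2]))   # edge and round rank above red and blue
```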


The platform is configurable enough that users need no traditional AI programming knowledge. Interactive interfaces let them control and manage data, rules, and decisions right from the desktop. Users can query object nodes and view object associations and their weights to predict the machine's likely next actions. They can even manage weight thresholds, so the super-intelligent machine can be fully supervised in case of an unwarranted scenario.

The following aspects allow users to manage the machine:

Data Model:
An intuitive data-relationship graph lets users edit and update the relationships and weights between entities with a simple drag-and-drop feature.
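The drag-and-drop editor itself is a user interface, but the edit it performs can be pictured as updating a weighted edge in a relationship graph. A hypothetical sketch of that underlying operation:

```python
# Hypothetical relationship graph behind the drag-and-drop editor:
# editing a link amounts to updating a weighted edge between entities.

class DataModel:
    def __init__(self):
        self.edges = {}  # (source, target) -> weight

    def link(self, source, target, weight):
        self.edges[(source, target)] = weight

    def update_weight(self, source, target, weight):
        if (source, target) not in self.edges:
            raise KeyError(f"no relationship {source} -> {target}")
        self.edges[(source, target)] = weight

model = DataModel()
model.link("camera", "obstacle", weight=0.6)
model.update_weight("camera", "obstacle", weight=0.9)  # user drags a link
print(model.edges)
```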

Connectors:
The platform lets engineers integrate any new sensor data and add it as nodes in the data model. The platform automatically relates the data to the associated object nodes, which are in turn consumed during data processing.
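How this auto-relation works internally is not specified; one simple interpretation is that a connector adds the sensor as a graph node and creates default-weight edges to its associated object nodes, as in this hypothetical sketch:

```python
# Hypothetical connector: new sensor data becomes a node in the data
# model and is automatically related to existing object nodes.

class Connector:
    def __init__(self, graph):
        self.graph = graph  # {node: {neighbor: weight}}

    def add_sensor_node(self, sensor, related_objects, default_weight=0.5):
        self.graph.setdefault(sensor, {})
        for obj in related_objects:
            # Auto-relate the sensor node to each associated object node.
            self.graph[sensor][obj] = default_weight
            self.graph.setdefault(obj, {})[sensor] = default_weight

graph = {"obstacle": {}}
Connector(graph).add_sensor_node("ultrasonic", ["obstacle"])
print(graph)
```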

Rules:
A basic set of rules is all an unsupervised machine needs; to keep the machine under closer supervision, simply add more rules. The platform allows users to set rules and weights on any data parameter and to constrain its learning to a set grid.
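A rule that caps the weight a parameter may take is one simple way to picture this supervision; the Rule class and its clamping behavior below are assumptions for illustration, not the platform's actual rule engine.

```python
# Hypothetical rule layer: each rule caps the weight a data parameter
# may take, keeping learning inside a supervised grid.

class Rule:
    def __init__(self, parameter, max_weight):
        self.parameter = parameter
        self.max_weight = max_weight

    def apply(self, weights):
        # Clamp the learned weight to the rule's ceiling.
        if weights.get(self.parameter, 0.0) > self.max_weight:
            weights[self.parameter] = self.max_weight

weights = {"speed": 0.95, "turn_rate": 0.4}
Rule("speed", max_weight=0.7).apply(weights)  # supervise the machine
print(weights)  # {'speed': 0.7, 'turn_rate': 0.4}
```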

Monitor:
To keep a constant check on the machine's decisions, users can query entities or labels to understand the weights for a given relationship, which reveal the next actions the machine will take.
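As an illustration, such a query could return an entity's associations sorted by weight, with the heaviest edge indicating the most likely next action; the graph layout and query function here are hypothetical.

```python
# Hypothetical monitor query: list an entity's associations by
# descending weight; the heaviest edge suggests the next action.

def query(graph, entity):
    associations = graph.get(entity, {})
    return sorted(associations.items(), key=lambda kv: kv[1], reverse=True)

graph = {"obstacle": {"stop": 0.8, "turn_left": 0.3, "reverse": 0.1}}
for action, weight in query(graph, "obstacle"):
    print(f"{action}: {weight}")
# stop: 0.8  <- most likely next action
```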

Web Learning:
A separate module can be plugged in to learn from information available on the web, using the same extraction and assembly patterns available on the platform. This enables the robot to make more informed decisions through implicit learning.
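A rough sketch of what such a web-learning hook might do, using only Python's standard library: fetch a page, strip its markup, and split the text into unit strings for the same pattern pipeline. The fetch_units helper is hypothetical and the URL is a placeholder.

```python
# Hypothetical web-learning hook: fetch a page, strip markup, and
# split the text into unit strings for the same pattern pipeline.
from urllib.request import urlopen
import re

def fetch_units(url):
    html = urlopen(url).read().decode("utf-8", errors="ignore")
    text = re.sub(r"<[^>]+>", " ", html)        # crude tag stripping
    return re.findall(r"[a-zA-Z]+", text.lower())

# units = fetch_units("https://example.com")  # requires network access
# print(units[:10])
```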

The platform is not yet available: we are 24 months away from releasing our first autonomous version. Currently, we are researching machine vision models for input extraction, spatial separation of sound inputs, and the integration of string patterns.