ROBOTS

Conventional robots move from A to B along predefined trajectories. Difficulties arise if, for example, a part is not positioned exactly where the robot expects it to be. In the worst case scenario, the robot cannot grasp the part and the application fails.

ROBOTS + MACHINE VISION

Equipping the robot with cameras and image processing, and/or with 3D cameras, improves the situation in some cases. However, robust automation often fails due to component variants, unsuitable surfaces (reflective or transparent objects), or varying lighting in the production environment.

ROBOTS + CogniDrive

With CogniDrive, an AI controller is connected upstream of the robot's existing controller, and one or more cameras are mounted on the arm. This is followed by a training phase of a few minutes, during which the robot is shown how to grip the component. Environmental influences, component variants, and similar factors are trained along with the process. Factors such as the component's surface quality, shape tolerances, or varying ambient light no longer matter, because the robot learns how to deal with them. After the training phase, you receive a model of the trained process; it is stored in the controller and serves as the basis for the trained movements. Learned skills can be transferred to any number of robots with no additional effort.
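To make this division of labor concrete, here is a minimal sketch of the workflow in Python. All names here (SkillModel, Controller, train_skill) are illustrative assumptions, not the actual CogniDrive API; the point is that training produces a portable model that deployment simply copies onto each controller.

```python
# Minimal sketch of the workflow described above. All names are
# illustrative assumptions, not the actual CogniDrive API.

from dataclasses import dataclass

@dataclass
class SkillModel:
    """Model of the trained process, stored on the controller after training."""
    weights: bytes = b""
    notes: str = ""

class Controller:
    """Stands in for one AI controller attached to a robot."""
    def __init__(self):
        self.skill = None
    def load(self, skill: SkillModel):
        self.skill = skill  # deployment is a plain model transfer

def train_skill(camera_images, guided_poses) -> SkillModel:
    """Few-minute training phase: images plus guided motions yield a model
    that tolerates part variants, surfaces, and lighting changes."""
    return SkillModel(notes=f"trained on {len(camera_images)} images")

# A skill trained once can be copied to any number of robots:
skill = train_skill(camera_images=[object()] * 500, guided_poses=[])
fleet = [Controller() for _ in range(3)]
for controller in fleet:
    controller.load(skill)
```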

The image on the left shows an example arrangement of two cameras with lighting for removing and inserting memory modules. Two cameras are not strictly necessary, but they shorten the training phase. The camera mounts are adapted to the application. Unlike the classic approach, no stereoscopic image pairs are generated; instead, the images are fed individually and directly into model training. Force/torque (F/T) sensors for easier hand guidance during training and interchangeable gripper systems are available as options.
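As a rough illustration of the point about stereo pairs, the following sketch (with hypothetical names and sample layout) shows how frames from two cameras could enter training as independent samples rather than being fused into pairs.

```python
# Sketch of the data handling described above: frames from each camera are
# fed into training as individual samples, not fused into stereo pairs.
# Names and the sample layout are illustrative assumptions.

def collect_samples(frames_cam1, frames_cam2, robot_poses):
    """Return one training sample per frame; no disparity or stereo
    reconstruction step is involved."""
    samples = []
    for cam_id, frames in ((1, frames_cam1), (2, frames_cam2)):
        for image, pose in zip(frames, robot_poses):
            samples.append({"camera": cam_id, "image": image, "pose": pose})
    return samples

# With one camera, simply pass an empty list for the second stream.
samples = collect_samples(["img_a", "img_b"], ["img_c", "img_d"], ["p0", "p1"])
print(len(samples))  # 4 independent samples, not 2 stereo pairs
```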

The existing robot controller continues to be used and remains fully intact. Our system extends the existing infrastructure with modern camera technology and an intelligent AI controller. These components enable more precise perception of the environment and advanced control of the robot.

Importantly, all of the robot's existing safety mechanisms remain unchanged. The AI controller acts as a higher-level control unit and communicates directly with the robot controller by transmitting specific commands. This ensures seamless integration into existing systems without compromising their safety logic.
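The topology described above could be sketched as follows. This is an assumption-laden illustration, not the product's interface: the AI layer only issues motion targets, while limits and emergency stops stay inside the untouched vendor controller.

```python
# Minimal sketch of the control topology described above, assuming a generic
# command interface. All class and method names are illustrative assumptions.

class RobotController:
    """Stands in for the existing vendor controller; its safety logic
    (limits, emergency stop, ...) is not touched by the AI layer."""
    def move_to(self, pose):
        print(f"moving to {pose} under local safety supervision")

class AIController:
    """Higher-level unit: turns camera images into specific commands."""
    def __init__(self, robot: RobotController):
        self.robot = robot
    def step(self, image):
        target = self._predict(image)   # model maps image -> next target pose
        self.robot.move_to(target)      # plain command, no safety override
    def _predict(self, image):
        return (0.0, 0.0, 0.1)          # placeholder for the trained model

AIController(RobotController()).step(image=None)
```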

For users who do not yet have their own robot, we also offer the option of a complete system including robot, camera unit and AI controller. This provides you with a ready-to-use solution from a single source.

A model is trained by moving the arm in a spiral around the target point; the images captured by the camera(s) serve as input. Recording takes a few minutes (shown in the video in greatly shortened form). Training can be performed either manually or automatically; manual training requires a force/torque sensor. In the video shown, a model is trained automatically that enables the robot to insert a SIMM module into the socket on the mainboard. After successful training, the robot can insert the SIMM module into the socket even if the mainboard is rotated or the module was gripped out of position.
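For illustration, a spiral recording pass of this kind could look like the following sketch; the parameters (number of turns, radius, sample count) are assumptions, not product values.

```python
# Sketch of the automatic recording described above: the arm spirals around
# the taught target point while the camera(s) capture training images.
# Parameters (turns, max_radius, points) are illustrative assumptions.

import math

def spiral_offsets(turns=3, max_radius=0.01, points=120):
    """Yield (dx, dy) offsets in metres tracing an outward spiral
    around the target point."""
    for i in range(points):
        t = i / (points - 1)
        r = t * max_radius                 # radius grows linearly outward
        a = t * turns * 2 * math.pi        # angle sweeps `turns` revolutions
        yield r * math.cos(a), r * math.sin(a)

for dx, dy in spiral_offsets():
    # In a real setup: move relative to the target, then capture images.
    pass
```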