ROBOT

Conventional robots move from A to B along predefined trajectories. Difficulties arise if, for example, a part is not positioned exactly where the robot expects it. In the worst case, the robot cannot grip the part and the application fails.

ROBOTS WITH MACHINE VISION

Equipping the robot with cameras and image processing, or with 3D cameras, can improve the situation in some cases. However, robust automation often fails due to component variants, unsuitable surfaces (reflective or transparent objects), or varying lighting in the production environment.

ROBOT + COGNIDRIVE

With CogniDrive, an AI controller is connected upstream of the existing robot controller, and one or more cameras are mounted on the robot arm. A training phase follows (typically around 10 minutes) in which the robot is shown how to grip the component. Environmental influences and component variants are trained as well: factors such as the surface quality of the component, shape tolerances, and varying ambient light become irrelevant, because the robot learns how to deal with them. After the training phase, you receive a model of the trained process, which is stored in the controller and serves as the basis for the trained movements. Learned skills can be transferred to any number of robots.
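In code form, this train-once, deploy-many workflow might look like the sketch below. All names here (trainer, train, save, load_model, and the model path) are hypothetical placeholders for illustration, not the actual CogniDrive API.

```python
# Minimal sketch of the workflow described above, assuming a hypothetical
# Python interface; none of these names are the actual CogniDrive API.

def train_and_deploy(trainer, fleet):
    """Train a grip once, store the model on the controller, copy to a fleet."""
    # Teach-in phase: typically around 10 minutes of guided demonstration.
    model = trainer.train(task="grip_memory_module", duration_min=10)

    # The trained model is stored on the controller and becomes the basis
    # for the trained movements.
    path = "/controller/models/grip_memory_module.bin"
    model.save(path)

    # Learned skills can be transferred to any number of robots.
    for robot in fleet:
        robot.load_model(path)
```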

The picture on the left shows an example arrangement of two cameras with lighting for the removal and insertion of memory modules. Two cameras are not strictly necessary, but they shorten the training times. The camera holders are adapted to the application. Instead of being combined into stereoscopic image pairs, as in the classic approach, the images are fed individually and directly into model training. Force/torque sensors for easier hand guiding during training and gripper quick-change systems are optionally available.
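As a rough illustration of that last point, the sketch below treats each camera frame as an independent training sample, with no stereo reconstruction step in between. The cameras, capture, and update names are assumed placeholder interfaces, not a documented API.

```python
# Illustrative sketch only: each camera image goes directly into training
# as its own sample, with no stereo reconstruction step in between.
# `cameras`, `capture`, and `update` are assumed placeholder interfaces.

def feed_training_samples(cameras, model, label):
    """Feed every camera's frame to the model individually."""
    for cam in cameras:             # one camera works; two shorten training
        frame = cam.capture()       # single 2D image, no depth/stereo fusion
        model.update(frame, label)  # each image is an independent sample
```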

The existing robot controller continues to be used and remains fully intact. Our system extends the existing infrastructure with modern camera technology and an AI controller. These components enable more precise detection of the environment and advanced control of the robot.

It is important that all of the robot's existing safety mechanisms remain unchanged. The AI controller acts as a higher-level control unit and communicates directly with the robot controller by transmitting specific commands. This ensures seamless integration into existing systems without compromising their safety logic.
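This control topology can be pictured as a simple loop: the AI controller infers a correction from the camera image and hands the robot controller an ordinary motion command. The sketch below is purely illustrative; the command string, host, and port are invented, and every real robot controller defines its own interface.

```python
# Purely illustrative: the AI controller closes the loop by sending normal
# motion commands to the unchanged robot controller. The command format,
# host, and port below are invented for illustration.
import socket

def correction_loop(camera, model, host="192.168.0.10", port=30002,
                    tolerance_mm=0.05):
    """Send model-derived corrections until the target is within tolerance."""
    with socket.create_connection((host, port)) as conn:
        while True:
            frame = camera.capture()           # placeholder camera call
            dx, dy, dz = model.predict(frame)  # image -> corrective motion
            if max(abs(dx), abs(dy), abs(dz)) < tolerance_mm:
                break                          # within tolerance: done
            cmd = f"MOVE_REL {dx:.3f} {dy:.3f} {dz:.3f}\n"
            conn.sendall(cmd.encode("ascii"))  # robot controller executes it
```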

For users who do not yet have their own robot, we also offer the option of supplying a complete system including robot, camera unit and AI controller. This provides customers with a ready-to-use solution from a single source.

A model is trained by moving the robot in a spiral around the target point. The images taken by one or more cameras serve as input. Recording takes a few minutes (shown greatly shortened in the video). Training can be performed either manually or automatically; a force/torque sensor is required for manual training. In the video shown, a model is trained automatically that enables the robot to insert a SIMM module into the socket on the mainboard. After successful training, the robot is able to insert the SIMM module into the socket even if the mainboard is twisted or the module has been gripped out of position.
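One plausible way to picture the automatic variant is the data-collection sketch below: the robot visits points on a spiral around the target, and each image is labelled with its known offset so the model can later map an image to a corrective motion. The robot and camera calls are hypothetical placeholders, not a real API.

```python
# Hypothetical sketch of automatic training data collection: visit points
# on a spiral around the target pose and record (image, offset) pairs.
# `move_relative` and `capture` are placeholder calls, not a real API.
import math

def spiral_offsets(turns=5, points_per_turn=24, max_radius_mm=20.0):
    """Yield (dx, dy) offsets in mm along an Archimedean spiral."""
    total = turns * points_per_turn
    for i in range(total):
        t = i / total
        r = max_radius_mm * t        # radius grows linearly outward
        a = 2 * math.pi * turns * t  # angle sweeps `turns` revolutions
        yield r * math.cos(a), r * math.sin(a)

def collect_training_set(robot, camera, target_pose):
    """Record one labelled image per spiral point around the target."""
    samples = []
    for dx, dy in spiral_offsets():
        robot.move_relative(target_pose, dx, dy)      # placeholder motion call
        samples.append((camera.capture(), (dx, dy)))  # image + known offset
    return samples
```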