Direct teaching is carried out with a motion-capture device that synchronizes the robot’s arm movements with the operator’s. During caregiving tasks, impedance control enables compliant motion so that the robot applies appropriate force to the person it is handling. The EIPL architecture, a deep predictive learning model, forecasts the robot’s upcoming states while minimizing prediction error. A convolutional autoencoder processes RGB camera images and extracts the key spatial attention points.
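The impedance control mentioned above can be illustrated with a minimal sketch of a joint-space impedance law. The gains, joint values, and function names below are hypothetical placeholders, not AIREC’s actual parameters: the point is only that a stiffness term pulls each joint toward its commanded angle while a damping term resists velocity error, so the arm yields compliantly on contact instead of tracking positions rigidly.

```python
import numpy as np

def impedance_torque(q, dq, q_des, dq_des, K, D):
    """Joint torques from an impedance law: tau = K(q_des - q) + D(dq_des - dq).

    q, dq       -- actual joint angles (rad) and velocities (rad/s)
    q_des, dq_des -- desired joint angles and velocities
    K, D        -- stiffness and damping gain matrices (illustrative values)
    """
    return K @ (q_des - q) + D @ (dq_des - dq)

# Example: a 2-joint arm with diagonal gains (hypothetical numbers).
K = np.diag([50.0, 30.0])        # stiffness, N*m/rad
D = np.diag([5.0, 3.0])          # damping, N*m*s/rad
q = np.array([0.1, 0.2])         # current joint angles
dq = np.array([0.0, 0.0])        # current joint velocities
q_des = np.array([0.2, 0.2])     # commanded joint angles
dq_des = np.array([0.0, 0.0])    # commanded joint velocities

tau = impedance_torque(q, dq, q_des, dq_des, K, D)
# Only joint 1 is off target by 0.1 rad, so tau = [5.0, 0.0]
```

Lowering K makes the arm softer on contact; raising D suppresses oscillation, which is why such gains are tuned per task in caregiving scenarios.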
According to the researchers, a Selective Kernel Network (SKNet) provides attention over joint angles and torques, dynamically adjusting the importance of each feature. The model predicts the next joint movements and sends them as commands to the impedance controller, yielding precise, adaptive caregiving motions.
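The dynamic reweighting idea behind SKNet can be sketched in a few lines. In the spirit of selective-kernel attention, two feature branches (here standing in for joint-angle and torque features; all names, shapes, and weights are illustrative assumptions, not AIREC’s implementation) are fused, scored, and recombined with softmax weights so that each branch’s contribution adapts to the input:

```python
import numpy as np

def softmax(x, axis=0):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def selective_fuse(branch_a, branch_b, w_a, w_b):
    """Selective-kernel-style fusion of two feature branches.

    branch_a, branch_b -- feature vectors from two branches
    w_a, w_b           -- (hypothetical) learned scoring weights per branch
    """
    s = branch_a + branch_b                # fuse: element-wise sum
    scores = np.stack([w_a @ s, w_b @ s])  # one scalar score per branch
    att = softmax(scores)                  # branch weights, summing to 1
    return att[0] * branch_a + att[1] * branch_b

rng = np.random.default_rng(0)
a = rng.normal(size=4)      # e.g. joint-angle features (illustrative)
b = rng.normal(size=4)      # e.g. torque features (illustrative)
w_a = rng.normal(size=4)    # hypothetical learned weights
w_b = rng.normal(size=4)
fused = selective_fuse(a, b, w_a, w_b)
```

Because the softmax weights sum to one, the fused output is a convex combination of the two branches: when the input suggests torque cues matter more, that branch’s weight grows, which is the "dynamic feature importance" the paragraph describes.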
Still in the testing phase, AIREC is not expected to be ready for nursing-care or medical facilities until around 2030, the team estimates. The robot’s initial price is expected to be at least ¥10 million (about $67,000).