Related Articles
Categorizing robots by performance fitness into the tree of robots
Robots are typically classified by specific morphological features, such as their kinematic structure. However, a complex interplay between morphology and intelligence shapes how well a robot performs a given process. Just as delicate surgical procedures demand high dexterity and tactile precision, manual warehouse or construction work requires strength and endurance. These process requirements call for robot systems whose performance fits the process. In this work, we introduce the tree of robots, a taxonomy that bridges the gap between morphological classification and process-based performance. It classifies robots by their fitness to perform, for example, physical interaction processes. Using 11 industrial manipulators, we constructed the first part of the tree of robots from a carefully deduced set of metrics reflecting fundamental robot capabilities across various industrial physical interaction processes. Through significance analysis, we identified substantial differences between the systems and grouped them with an expectation-maximization algorithm, yielding a fitness-based robot classification that is accessible and open to contributions.
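The grouping step described above can be illustrated with a minimal sketch: a two-component Gaussian mixture fit by expectation-maximization over a one-dimensional fitness score. The scores, the number of groups, and the EM implementation here are all placeholders for illustration, not the authors' metrics or pipeline.

```python
import numpy as np

def em_two_gaussians(x, iters=100):
    """Minimal 1-D expectation-maximization for a two-component
    Gaussian mixture; returns a hard group label per point."""
    mu = np.array([x.min(), x.max()], dtype=float)  # initial means
    var = np.full(2, x.var() + 1e-6)                # initial variances
    pi = np.full(2, 0.5)                            # mixing weights
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        dens = pi / np.sqrt(2 * np.pi * var) * \
            np.exp(-(x[:, None] - mu) ** 2 / (2 * var))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from responsibilities
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-9
        pi = nk / len(x)
    return resp.argmax(axis=1)

# Hypothetical composite fitness scores for 11 manipulators
scores = np.array([0.91, 0.88, 0.93, 0.52, 0.49, 0.55,
                   0.50, 0.47, 0.90, 0.53, 0.48])
labels = em_two_gaussians(scores)
print(labels)
```

With well-separated scores, EM recovers the high- and low-fitness groups regardless of label ordering.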
Aerodynamic effect for collision-free reactive navigation of a small quadcopter
The small footprint of tiny multirotor vehicles is advantageous for accessing tight spaces, but their limited payload and endurance restrict the sensing and computing units they can carry for navigation. This article reports an aerodynamics-based strategy for a ducted rotorcraft to avoid wall collisions and explore unknown environments using only the minimal sensing system conventionally conceived for hovering. The framework leverages the duct-strengthened interaction between the propeller wake and vertical surfaces. When incorporated into the flight dynamics, the derived momentum-theory-based model allows the robot to estimate an obstacle's distance and direction without range sensors or vision. Building on this, we devised a flight controller and reactive navigation methods that let the robot fly safely in unexplored environments. Flight experiments validated the detection and collision avoidance ability: the robot identified and followed a wall contour to negotiate a staircase and evaded detected obstacles in proof-of-concept flights.
User-specified inverse kinematics taught in virtual reality reduce time and effort to hand-guide redundant surgical robots
During medical procedures, robots should not collide with nearby obstacles such as lamps, screens, or medical personnel. Redundant robots have more degrees of freedom than are needed to move endoscopic tools during surgery and can be reshaped to avoid obstacles by moving purely in the space of these additional degrees of freedom (the null space). Although state-of-the-art robots allow surgeons to hand-guide endoscopic tools, reshaping the robot in null space is not intuitive for surgeons. Here, we propose a learned task space control that lets surgeons intuitively teach preferred robot configurations (shapes) that avoid obstacles, using a VR-based planner in simulation. Later, during surgery, surgeons control both the endoscopic tool and the robot configuration (shape) with one hand. In a user study, the learned task space control outperformed state-of-the-art naive task space control on all measured performance metrics (time, effort, and user-perceived effort). Our solution allowed users to intuitively interact with robots in VR and to reshape robots while moving tools in medical and industrial applications.
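The null-space idea underlying this abstract can be sketched with the standard redundancy-resolution formula qdot = J⁺ẋ + (I − J⁺J)q̇₀: the first term moves the tool, the second reshapes the arm without disturbing it. The Jacobian values and dimensions (3 task DOF, 7 joints) below are assumptions for illustration, not the paper's controller.

```python
import numpy as np

rng = np.random.default_rng(1)
J = rng.standard_normal((3, 7))      # placeholder task Jacobian (3x7)
J_pinv = np.linalg.pinv(J)           # Moore-Penrose pseudoinverse
N = np.eye(7) - J_pinv @ J           # null-space projector

xdot = np.array([0.1, 0.0, 0.05])    # desired tool velocity
qdot_null = rng.standard_normal(7)   # preferred reshaping motion
qdot = J_pinv @ xdot + N @ qdot_null

# The projected reshaping term produces no task-space motion,
# so the tool still moves exactly with xdot:
print(np.allclose(J @ (N @ qdot_null), 0.0))  # True
print(np.allclose(J @ qdot, xdot))            # True
```

Any obstacle-avoiding or user-taught configuration preference can be fed in through `qdot_null` without affecting the surgeon's tool motion.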
High-density electromyography for effective gesture-based control of physically assistive mobile manipulators
High-density electromyography (HDEMG) can detect myoelectric activity that serves as control input to a variety of electronically controlled devices. Furthermore, HDEMG sensors can be built into a variety of clothing, allowing for a nonintrusive myoelectric interface that integrates into a user's routine. In our work, we introduce an easily producible HDEMG device that interfaces with the control of a mobile manipulator to perform a range of household and physically assistive tasks. Mobile manipulators can operate throughout the home and are suited to a spectrum of assistive and daily tasks. We evaluate real-time myoelectric gesture recognition with our device for precise control over the intricate mobility and manipulation functionalities of an 8-degree-of-freedom mobile manipulator. Our evaluation, involving 13 participants engaging in challenging self-care and household activities, demonstrates the potential of our wearable HDEMG system to control a mobile manipulator in the home.
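A toy sketch of windowed myoelectric gesture recognition: per-channel RMS features over a short window, classified with a nearest-centroid rule. The signal shapes, channel count, gesture names, and classifier are all assumptions for illustration, not the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(3)

def rms_features(window):
    """window: (samples, channels) EMG; returns per-channel RMS."""
    return np.sqrt(np.mean(window ** 2, axis=0))

# Synthetic EMG: each gesture excites the 8 channels differently
gestures = {
    "rest": np.full(8, 0.2),
    "fist": np.array([1.0] * 4 + [0.2] * 4),
    "open": np.array([0.2] * 4 + [1.0] * 4),
}

# "Train" by averaging RMS features over labeled windows
centroids = {}
for name, amp in gestures.items():
    feats = np.array([rms_features(rng.normal(0, amp, (200, 8)))
                      for _ in range(20)])
    centroids[name] = feats.mean(axis=0)

def classify(window):
    """Nearest-centroid gesture label for one EMG window."""
    f = rms_features(window)
    return min(centroids, key=lambda g: np.linalg.norm(f - centroids[g]))

test_window = rng.normal(0, gestures["fist"], (200, 8))
print(classify(test_window))
```

In a real interface, each recognized gesture would be mapped to a discrete robot command (drive, lift, grasp, and so on).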
Efficient data-driven joint-level calibration of cable-driven surgical robots
Accurate joint position estimation is crucial for controlling cable-driven laparoscopic surgical robots such as the RAVEN-II. However, slack and stretch in the cables introduce errors into the kinematic estimate, complicating precise control. This work proposes an efficient data-driven calibration method that requires no additional sensors after training. The calibration takes 8 to 21 min and maintains high accuracy during 6 hours of heavily loaded operation. The deep neural network (DNN) model reduces errors by 76%, achieving accuracies of 0.104°, 0.120°, and 0.118 mm for joints 1, 2, and 3, respectively. Compared with end-to-end models, the DNN achieves better accuracy and faster convergence by correcting the original, inaccurate joint positions. In addition, a linear regression model offers inference 160 times faster than the DNN, suitable for RAVEN's 1000-Hz control loop, at a slight cost in accuracy. This approach should significantly enhance the accuracy of similar cable-driven robots.
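The linear-regression variant mentioned above can be sketched as an ordinary least-squares corrector: fit offline against a ground-truth sensor, then apply one dot product per control cycle. The feature choice, coefficients, and data below are synthetic assumptions, not RAVEN-II measurements.

```python
import numpy as np

rng = np.random.default_rng(2)

# Placeholder features (e.g., motor-side position plus load-related
# signals) and a synthetic linear ground truth with sensor noise
X = rng.uniform(-1, 1, (500, 3))
w_true = np.array([0.9, -0.2, 0.05])
y = X @ w_true + 0.3 + rng.normal(0, 0.001, 500)

# Least-squares fit with a bias column; at run time the correction is
# a single dot product per sample, cheap enough for a 1000-Hz loop
A = np.hstack([X, np.ones((500, 1))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

pred = A @ w
rmse = np.sqrt(np.mean((pred - y) ** 2))
print(rmse)  # on the order of the injected sensor noise
```

Trading the DNN for this model gives up some accuracy but makes per-cycle inference cost trivially small and deterministic.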