Related Articles
Categorizing robots by performance fitness into the tree of robots
Robots are typically classified by specific morphological features, such as their kinematic structure. However, a complex interplay between morphology and intelligence shapes how well a robot performs a given process. Just as delicate surgical procedures demand high dexterity and tactile precision, manual warehouse or construction work requires strength and endurance. These process requirements necessitate robot systems whose performance fits the process. In this work, we introduce the tree of robots as a taxonomy that bridges the gap between morphological classification and process-based performance. It classifies robots based on their fitness to perform, for example, physical interaction processes. Using 11 industrial manipulators, we constructed the first part of the tree of robots from a carefully deduced set of metrics reflecting fundamental robot capabilities across various industrial physical interaction processes. Through significance analysis, we identified substantial differences between the systems and grouped them via an expectation-maximization algorithm, creating a fitness-based robot classification that is accessible and open for contributions.
Determinants of consumer intention to use autonomous delivery vehicles: based on the planned behavior theory and normative activation model
Autonomous delivery vehicles (ADVs) that provide contactless services have attracted much academic and practical attention in China in recent years. Despite this, there is a lack of in-depth research on what motivates customers to embrace ADVs. This study integrates the theory of planned behavior (TPB) and the normative activation model (NAM) to explore how environmental, situational, and individual factors affect the original TPB constructs and, ultimately, consumers’ intention to use ADVs. Structural equation modeling was performed on survey data from 561 Chinese consumers collected through an online sampling platform. The results show that among the factors affecting consumer intention, word-of-mouth recommendations have the greatest impact, followed by perceived enjoyment, COVID-19 risk, ascription of responsibility, subjective norm, attitude, and perceived behavioral control. The results not only make important theoretical contributions to the field of technology acceptance but also provide helpful references for logistics enterprises, ADV technology providers, and policymakers.
Hollow fiber-based strain sensors with desirable modulus and sensitivity at effective deformation for dexterous electroelastomer cylindrical actuator
Electroelastomer cylindrical actuators, a typical representative of soft actuators, have recently aroused increasing interest owing to their advantages in flexibility, deformability, and spatial utilization. Proprioception is crucial for controlling and monitoring the shape and position of these actuators. However, most existing flexible sensors have a modulus mismatch with the actuation unit, hindering the free movement of these actuators. Herein, a low-modulus strain sensor based on laser-induced cellular graphitic flakes (CGF) on the surface of hollow TPU fibers (HTF) is presented. Fabricated through electrostatic self-assembly, the flexible sensor features a unique hybrid sensing unit comprising soft HTF as the substrate and rigid CGF as the conductive path. As a result, the sensor simultaneously possesses a desirable modulus (~0.155 MPa), a gauge factor of 220.3 (25% < ε < 50%), fast response/recovery behavior (31/62 ms), and a low detection limit (0.1% strain). Integrating the sensor onto electroelastomer cylindrical actuators enables precise measurement of deformation modes, directions, and magnitudes. As a proof-of-concept demonstration, a prototype soft robot with high-precision perception was successfully designed, achieving real-time detection of its deformations during crawling. Thus, the proposed scheme sheds new light on the development of intelligent soft robots.
User-specified inverse kinematics taught in virtual reality reduce time and effort to hand-guide redundant surgical robots
Medical robots should not collide with nearby obstacles, such as lamps, screens, or medical personnel, during medical procedures. Redundant robots have more degrees of freedom than are needed to move endoscopic tools during surgery and can be reshaped to avoid obstacles by moving purely in the space of these additional degrees of freedom (the null space). Although state-of-the-art robots allow surgeons to hand-guide endoscopic tools, reshaping the robot in null space is not intuitive for surgeons. Here we propose a learned task space control that allows surgeons to intuitively teach preferred robot configurations (shapes) that avoid obstacles, using a VR-based planner in simulation. Later, during surgery, surgeons control both the endoscopic tool and the robot configuration (shape) with one hand. In a user study, we found that learned task space control outperformed state-of-the-art naive task space control in all measured performance metrics (time, effort, and user-perceived effort). Our solution allows users to intuitively interact with robots in VR and to reshape robots while moving tools in medical and industrial applications.
Vision-based tactile sensor design using physically based rendering
High-resolution tactile sensors are very helpful to robots for fine-grained perception and manipulation tasks, but designing such sensors is challenging. This is because the designs rely on the compact integration of multiple optical elements, and the correlation between element arrangement and sensor accuracy is difficult to understand by trial and error. In this work, we introduce the digital design of vision-based tactile sensors using a physically accurate light simulator. The framework modularizes the design process, parameterizes the sensor components, and contains an evaluation metric to quantify a sensor’s performance. Using this metric, we quantify the effects of sensor shape, illumination setting, and sensing-surface material on tactile sensor performance. The proposed optical simulation framework can replicate the tactile image of a real vision-based tactile sensor prototype without any prior sensor-specific data. Using our approach, we substantially improved the design of a fingertip GelSight sensor: the improved design performs approximately 5 times better than the previous state-of-the-art human-expert design at real-world robotic detection of embossed text. Our simulation approach can be used with any vision-based tactile sensor to produce a physically accurate tactile image. Overall, our approach enables the automatic design of sensorized soft robots and opens the door to closed-loop co-optimization of controllers and sensors for dexterous manipulation.