Related Articles
Evaluation of a newly developed oral and maxillofacial surgical robotic platform (KD-SR-01) in head and neck surgery: a preclinical trial in porcine models
Traditional open head and neck surgery often leaves permanent scars, significantly affecting appearance. The emergence of surgical robots has introduced a new era for minimally invasive surgery. However, the complex anatomy of the head and neck region, particularly the oral and maxillofacial areas, combined with the high costs associated with established systems such as the da Vinci, has limited the widespread adoption of surgical robots in this field. Recently, surgical robotic platforms in China have developed rapidly, exemplified by the promise shown by the KangDuo Surgical Robot (KD-SR). Although the KD-SR has achieved results comparable to the da Vinci surgical robot in urology and colorectal surgery, its performance in the complex head and neck region remains untested. This study evaluated the feasibility, effectiveness, and safety of the newly developed KD-SR-01, comparing it with standard endoscopic systems in head and neck procedures on porcine models. We performed parotidectomy, submandibular gland resection, and neck dissection; collected baseline characteristics and perioperative data; and specifically assessed cognitive workload using the NASA-TLX. None of the robotic procedures were converted to endoscopic or open surgery. The results showed no significant difference in operation time between the two groups (P = 0.126), better intraoperative bleeding control (P = 0.001), and a significant reduction in cognitive workload (P < 0.001) in the robotic group. In conclusion, the KD-SR-01 is feasible, effective, and safe for head and neck surgery. Further investigation through well-designed clinical trials with long-term follow-up is necessary to establish the full potential of this emerging robotic platform.
Vision-based tactile sensor design using physically based rendering
High-resolution tactile sensors are very helpful to robots for fine-grained perception and manipulation tasks, but designing such sensors is challenging. This is because the designs rely on the compact integration of multiple optical elements, and it is difficult to understand the correlation between element arrangement and sensor accuracy through trial and error. In this work, we introduce the digital design of vision-based tactile sensors using a physically accurate light simulator. The framework modularizes the design process, parameterizes the sensor components, and contains an evaluation metric to quantify a sensor's performance. We quantify the effects of sensor shape, illumination setting, and sensing surface material on tactile sensor performance using our evaluation metric. The proposed optical simulation framework can replicate the tactile image of a real vision-based tactile sensor prototype without any prior sensor-specific data. Using our approach, we can substantially improve the design of a fingertip GelSight sensor. This improved design performs approximately 5 times better than the previous state-of-the-art human-expert design at real-world robotic detection of embossed text. Our simulation approach can be used with any vision-based tactile sensor to produce a physically accurate tactile image. Overall, our approach enables the automatic design of sensorized soft robots and opens the door for closed-loop co-optimization of controllers and sensors for dexterous manipulation.
A thalamic hub-and-spoke network enables visual perception during action by coordinating visuomotor dynamics
For accurate perception and motor control, an animal must distinguish between sensory experiences elicited by external stimuli and those elicited by its own actions. The diversity of behaviors and their complex influences on the senses make this distinction challenging. Here, we uncover an action–cue hub that coordinates motor commands with visual processing in the brain’s first visual relay. We show that the ventral lateral geniculate nucleus (vLGN) acts as a corollary discharge center, integrating visual translational optic flow signals with motor copies from saccades, locomotion and pupil dynamics. The vLGN relays these signals to correct action-specific visual distortions and to refine perception, as shown for the superior colliculus and in a depth-estimation task. Simultaneously, brain-wide vLGN projections drive corrective actions necessary for accurate visuomotor control. Our results reveal an extended corollary discharge architecture that refines early visual transformations and coordinates actions via a distributed hub-and-spoke network to enable visual perception during action.
Clinical validity of fluorescence-based devices versus visual-tactile method in detection of secondary caries around resin composite restorations: diagnostic accuracy study
To assess the validity of light-induced and laser-induced fluorescence devices compared to the visual-tactile method for detecting secondary caries around resin composite restorations.
Molecular optimization using a conditional transformer for reaction-aware compound exploration with reinforcement learning
Designing molecules with desirable properties is a critical endeavor in drug discovery. Because of recent advances in deep learning, molecular generative models have been developed. However, existing compound exploration models often disregard the important issue of ensuring the feasibility of organic synthesis. To address this issue, we propose TRACER, a framework that integrates molecular property optimization with synthetic pathway generation. The model can predict the product derived from a given reactant via a conditional transformer under the constraints of a reaction type. The molecular optimization results of an activity prediction model targeting DRD2, AKT1, and CXCR4 revealed that TRACER effectively generated compounds with high scores. The transformer model, which recognizes entire molecular structures, captures the complexity of organic synthesis and enables navigation of a vast chemical space while considering real-world reactivity constraints.