Towards Interpretable Visuo-Tactile Predictive Models for Soft Robot Interactions
Donato, Enrico; Thuruthel, Thomas George; Falotico, Egidio
2024-01-01
Abstract
Autonomous systems face the intricate challenge of navigating unpredictable environments and interacting with external objects. The successful integration of robotic agents into real-world settings hinges on their perception capabilities, which combine world models with predictive skills. Effective perception models rely on fusing multiple sensory modalities to probe the surroundings. Deep learning applied to raw sensory data offers a viable option; however, the learned perceptive representations are difficult to interpret. This challenge is particularly pronounced in soft robots, whose compliant structures and materials make prediction even harder. Our work addresses this complexity by harnessing a generative model to build a multi-modal perception model for soft robots, leveraging proprioceptive and visual information to anticipate and interpret contact interactions with external objects. We provide a suite of tools for interpreting the perception model, shedding light on how the trained model fuses and predicts across multiple sensory inputs. Finally, we discuss the outlook of the perception model and its implications for control.
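The paper's actual architecture is not reproduced on this record page, so the following is only a minimal sketch, assuming a PyTorch-style fusion network with hypothetical names and dimensions (VisuoTactilePredictor, proprio_dim, latent_dim), of how visual and proprioceptive inputs could be fused into a shared latent code that a contact-prediction head reads out. A simple deterministic head stands in here for the generative model described in the abstract.

```python
# Illustrative sketch only: names, layer sizes, and the deterministic head are
# assumptions, not the architecture used in the paper.
import torch
import torch.nn as nn


class VisuoTactilePredictor(nn.Module):
    """Hypothetical visuo-proprioceptive fusion model for contact prediction."""

    def __init__(self, proprio_dim=12, latent_dim=32, contact_dim=1):
        super().__init__()
        # Visual encoder: 64x64 RGB frame -> feature vector
        self.vision_enc = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 128), nn.ReLU(),
        )
        # Proprioceptive encoder: actuation/shape state -> feature vector
        self.proprio_enc = nn.Sequential(nn.Linear(proprio_dim, 64), nn.ReLU())
        # Fusion of both modalities into a shared latent representation
        self.fusion = nn.Linear(128 + 64, latent_dim)
        # Head predicting the contact interaction signal from the latent code
        self.contact_head = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(),
            nn.Linear(64, contact_dim),
        )

    def forward(self, image, proprio):
        z = self.fusion(torch.cat([self.vision_enc(image),
                                   self.proprio_enc(proprio)], dim=-1))
        # Return both the prediction and the latent code, so the fused
        # representation can be inspected for interpretability analyses.
        return self.contact_head(z), z


# Example usage with dummy data
model = VisuoTactilePredictor()
img = torch.randn(4, 3, 64, 64)    # batch of camera frames
state = torch.randn(4, 12)         # batch of proprioceptive readings
contact_pred, latent = model(img, state)
print(contact_pred.shape, latent.shape)  # torch.Size([4, 1]) torch.Size([4, 32])
```

Exposing the shared latent code alongside the prediction is one way interpretability tools of the kind the abstract mentions could probe how each modality contributes to the fused representation.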
| File | Type | License | Size | Format | Access |
|---|---|---|---|---|---|
| Towards_Interpretable_Visuo-Tactile_Predictive_Models_for_Soft_Robot_Interactions.pdf | Post-print/Accepted manuscript | Publisher's copyright | 2.7 MB | Adobe PDF | Authorized users only (request a copy) |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

