Researchers at ETH Zurich have
recently introduced a new deep learning-based strategy that could enable
tactile sensing in robots without requiring large amounts of real-world data. In
their experiments, the researchers used a sensor they built from simple,
low-cost components. The sensor consists of a standard camera placed below
a soft material that contains randomly scattered tiny plastic particles. When
a force is applied to its surface, the soft material deforms and causes the
plastic particles to move. This motion is then captured by the sensor's camera
and recorded. The researchers created models of the sensor's soft material and
of its camera projection using state-of-the-art computational methods. They then
used these models in simulation to create a dataset of 13,448 synthetic images
well suited for training tactile sensing algorithms.
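The article does not detail the authors' deformation model or rendering pipeline, but the general idea of turning a simulated particle field into a synthetic camera image can be sketched as follows. Everything in this sketch, including the toy indentation model, the pinhole-style projection, and all parameter values, is a hypothetical illustration rather than the authors' method.

```python
# Illustrative sketch (not the authors' code): render a synthetic tactile image
# by projecting simulated particle positions onto a virtual camera image plane.
import numpy as np

rng = np.random.default_rng(0)

# Scatter particles inside a soft slab; x, y span the sensing surface (mm),
# z is each particle's distance from the camera (mm). Values are made up.
n_particles = 300
particles = np.column_stack([
    rng.uniform(-10, 10, n_particles),   # x [mm]
    rng.uniform(-10, 10, n_particles),   # y [mm]
    rng.uniform(2, 6, n_particles),      # z [mm]
])

def indent(points, center=(0.0, 0.0), radius=4.0, depth=1.5):
    """Toy deformation: particles near the indenter are pushed toward the camera."""
    dx = points[:, 0] - center[0]
    dy = points[:, 1] - center[1]
    falloff = np.exp(-(dx**2 + dy**2) / (2 * radius**2))
    deformed = points.copy()
    deformed[:, 2] -= depth * falloff
    return deformed

def render(points, size=64, focal=6.0):
    """Simple pinhole-style projection of particles into a grayscale image."""
    img = np.zeros((size, size), dtype=np.float32)
    u = (focal * points[:, 0] / points[:, 2]).astype(int) + size // 2
    v = (focal * points[:, 1] / points[:, 2]).astype(int) + size // 2
    valid = (u >= 0) & (u < size) & (v >= 0) & (v < size)
    img[v[valid], u[valid]] = 1.0   # splat each visible particle as a bright pixel
    return img

# A synthetic training sample could pair the rendered images with the known
# contact parameters (here: indenter position and depth) used to generate them.
image_at_rest = render(particles)
image_pressed = render(indent(particles, center=(2.0, -1.0), depth=1.5))
```

Because every synthetic image is generated from known contact parameters, the labels come for free, which is what makes simulated data attractive for this kind of sensing task.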
The ability to generate training data for their tactile sensing model entirely in
simulation is a major advantage, as it spared the researchers the need to collect
and annotate data in the real world. The researchers used the synthetic dataset to
train a neural network architecture for vision-based tactile sensing
applications and then evaluated its performance in a series of tests. The
neural network achieved remarkable results, making accurate sensing predictions
on real data even though it was trained entirely on simulated data. In the future, the deep
learning architecture could provide robots with an artificial sense of touch,
potentially enhancing their grasping and manipulation skills. In addition, the
synthetic dataset they compiled could be used to train other models for tactile
sensing or may inspire the creation of new simulation-based datasets.
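Purely as an illustration of the sim-to-real training procedure described above, a short script along the following lines would train a network on synthetic images and then measure its error on real ones. The model below is a generic convolutional regressor written against PyTorch, not the architecture from the paper, and the regression targets, hyperparameters, and data variables are hypothetical placeholders.

```python
# Illustrative sketch (not the published architecture): train a small CNN on
# synthetic tactile images, then evaluate the same network on real sensor images.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

class TactileRegressor(nn.Module):
    def __init__(self, out_dim=3):          # e.g., contact x, y and normal force (assumed targets)
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, out_dim),
        )

    def forward(self, x):
        return self.net(x)

def make_loader(images, labels, batch_size=64, shuffle=True):
    # images: (N, 1, H, W) float tensor, labels: (N, out_dim) float tensor
    return DataLoader(TensorDataset(images, labels), batch_size=batch_size, shuffle=shuffle)

def train(model, loader, epochs=10, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for imgs, targets in loader:
            opt.zero_grad()
            loss = loss_fn(model(imgs), targets)
            loss.backward()
            opt.step()
    return model

@torch.no_grad()
def evaluate(model, loader):
    errors = [nn.functional.l1_loss(model(imgs), targets) for imgs, targets in loader]
    return torch.stack(errors).mean().item()

# Hypothetical usage: the optimizer only ever sees simulated images, while the
# evaluation loader holds images captured by the physical sensor.
# model = train(TactileRegressor(), make_loader(synthetic_images, synthetic_labels))
# print("real-data MAE:", evaluate(model, make_loader(real_images, real_labels, shuffle=False)))
```

The point the sketch mirrors is the sim-to-real setup: the network is fit only on simulated data, and its accuracy is then checked against measurements from the physical sensor.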
More information: