Robot Concept Acquisition Based on Interaction Between Probabilistic and Deep Generative Models

Kuniyasu, Ryo and Nakamura, Tomoaki and Taniguchi, Tadahiro and Nagai, Takayuki (2021) Robot Concept Acquisition Based on Interaction Between Probabilistic and Deep Generative Models. Frontiers in Computer Science, 3. ISSN 2624-9898

Full text: fcomp-03-618069.pdf (Published Version, 2 MB download)

Abstract

We propose a method for multimodal concept formation. The method integrates multimodal latent Dirichlet allocation (MLDA)-based concept formation with variational autoencoder (VAE)-based feature extraction, enabling unsupervised multimodal clustering, cross-modal inference, and unsupervised representation learning to be performed jointly. Multimodal clustering, representation learning, and cross-modal inference are all critical for robots to form multimodal concepts from sensory data. Although various models have been proposed for concept formation, previous studies extracted features using manually designed or pre-trained feature extractors, so representation learning was not performed simultaneously with clustering. Moreover, while those models could predict the generative probabilities of features extracted from sensory data, they could not predict the sensory data itself in cross-modal inference. A method that can jointly perform clustering, feature learning, and cross-modal inference over multimodal sensory data is therefore required for concept formation. To realize such a method, we extend the VAE to the multinomial VAE (MNVAE), whose latent variables follow a multinomial distribution, and construct a model that integrates the MNVAE and the MLDA. In the experiments, multimodal information consisting of images and words acquired by a robot was classified using the integrated model. The results demonstrate that the integrated model classifies the multimodal information as accurately as the previous model despite its feature extractor being trained in an unsupervised manner, that image features suitable for clustering can be learned, and that cross-modal inference from words to images is possible.
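For readers who want a concrete picture of the extension described in the abstract, the following is a minimal PyTorch sketch of a VAE whose latent variables are categorical rather than Gaussian, which is the core idea behind the MNVAE. This is not the authors' implementation: the Gumbel-Softmax relaxation used to sample the discrete latents, the uniform prior, and all class names, layer sizes, and hyperparameters are illustrative assumptions.

```python
# Illustrative sketch only: a VAE with categorical ("multinomial") latent
# variables, approximating the MNVAE idea described in the abstract.
# The Gumbel-Softmax relaxation, uniform prior, and all sizes/names are
# assumptions, not the authors' implementation.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class MNVAESketch(nn.Module):
    def __init__(self, input_dim=784, hidden_dim=256, num_classes=50):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),  # logits of categorical posterior
        )
        self.decoder = nn.Sequential(
            nn.Linear(num_classes, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, input_dim),    # logits of the reconstruction
        )

    def forward(self, x, temperature=0.5):
        logits = self.encoder(x)
        # Differentiable approximate sample from the categorical posterior.
        z = F.gumbel_softmax(logits, tau=temperature, hard=False)
        return self.decoder(z), logits

def elbo_loss(recon_logits, x, latent_logits):
    # Reconstruction term (assumes inputs normalized to [0, 1]).
    rec = F.binary_cross_entropy_with_logits(recon_logits, x, reduction="sum")
    # KL divergence between the categorical posterior and a uniform prior:
    # KL(q || U) = sum_k q_k (log q_k + log K).
    q = F.softmax(latent_logits, dim=-1)
    log_q = F.log_softmax(latent_logits, dim=-1)
    kl = (q * (log_q + math.log(latent_logits.size(-1)))).sum()
    return rec + kl
```

One plausible reading of the integration is that the normalized posterior probabilities (q in the sketch) play the role that bag-of-words counts play in the MLDA, letting the two generative models exchange messages through a shared multinomial representation; the exact coupling is defined in the paper itself.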

Item Type: Article
Subjects: EP Archives > Computer Science
Depositing User: Managing Editor
Date Deposited: 07 Dec 2022 10:30
Last Modified: 26 Sep 2023 05:39
URI: http://research.send4journal.com/id/eprint/219
