Improving the generative performance of chemical autoencoders through transfer learning

Iovanac, Nicolae C and Savoie, Brett M (2020) Improving the generative performance of chemical autoencoders through transfer learning. Machine Learning: Science and Technology, 1 (4). 045010. ISSN 2632-2153

Iovanac_2020_Mach._Learn.__Sci._Technol._1_045010.pdf - Published Version (Download, 1MB)

Abstract

Generative models are a sub-class of machine learning models that are capable of generating new samples with a target set of properties. In chemical and materials applications, these new samples might be drug targets, novel semiconductors, or catalysts constrained to exhibit an application-specific set of properties. Given their potential to yield high-value targets from otherwise intractable design spaces, generative models are currently under intense study with respect to how predictions can be improved through changes in model architecture and data representation. Here we explore the potential of multi-task transfer learning as a complementary approach to improving the validity and property specificity of molecules generated by such models. We have compared baseline generative models trained on a single property prediction task against models trained on additional ancillary prediction tasks and observe a generic positive impact on the validity and specificity of the multi-task models. In particular, we observe that the validity of generated structures is strongly affected by whether the models are supplied with chemical property data, as opposed to only syntactic structural data, during learning. We demonstrate this effect in both interpolative and extrapolative scenarios (i.e., where the generative targets are well or poorly represented in the training data) for models trained to generate high energy structures and models trained to generate structures with bandgaps targeted to specific ranges. In both instances, the inclusion of additional chemical property data improves the ability of models to generate valid, unique structures with increased property specificity. This approach requires only minor alterations to existing generative models, in many cases leveraging prediction frameworks already native to these models. Additionally, the transfer learning strategy is complementary to ongoing efforts to improve model architectures and data representation and can foreseeably be stacked on top of these developments.
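
As a concrete illustration of the multi-task idea summarized above, the sketch below pairs a token-level molecular autoencoder with auxiliary property-regression heads attached to the latent code, so that ancillary chemical property labels shape the same representation used for generation. This is a minimal, hypothetical Python/PyTorch sketch of the general approach, not the authors' implementation; the architecture, dimensions, loss weighting, and toy data are illustrative assumptions.

    # Minimal sketch (illustrative only): a SMILES-style autoencoder whose latent
    # code also feeds several property-prediction heads, so ancillary property
    # tasks influence the latent space alongside reconstruction.
    import torch
    import torch.nn as nn

    class MultiTaskChemAutoencoder(nn.Module):
        def __init__(self, vocab_size=40, emb_dim=64, latent_dim=128, n_property_tasks=3):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            self.encoder = nn.GRU(emb_dim, latent_dim, batch_first=True)
            self.decoder = nn.GRU(emb_dim, latent_dim, batch_first=True)
            self.to_vocab = nn.Linear(latent_dim, vocab_size)
            # One regression head per ancillary property (e.g. energy, band gap, ...)
            self.property_heads = nn.ModuleList([
                nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, 1))
                for _ in range(n_property_tasks)
            ])

        def forward(self, tokens):
            x = self.embed(tokens)                  # (B, T, emb_dim)
            _, h = self.encoder(x)                  # h: (1, B, latent_dim)
            z = h.squeeze(0)                        # latent code shared by all tasks
            dec_out, _ = self.decoder(x, h)         # teacher-forced reconstruction
            logits = self.to_vocab(dec_out)         # (B, T, vocab_size)
            props = torch.cat([head(z) for head in self.property_heads], dim=-1)
            return logits, props                    # props: (B, n_property_tasks)

    def multitask_loss(logits, tokens, pred_props, true_props, prop_weight=1.0):
        # Reconstruction loss plus weighted ancillary property losses.
        recon = nn.functional.cross_entropy(
            logits.reshape(-1, logits.size(-1)), tokens.reshape(-1)
        )
        prop = nn.functional.mse_loss(pred_props, true_props)
        return recon + prop_weight * prop

    # Toy usage with random data, just to show how the training signals combine.
    model = MultiTaskChemAutoencoder()
    tokens = torch.randint(0, 40, (8, 30))          # batch of tokenized molecules
    true_props = torch.randn(8, 3)                  # ancillary property labels
    logits, pred_props = model(tokens)
    loss = multitask_loss(logits, tokens, pred_props, true_props)
    loss.backward()

In a setup like this, the weight on the property losses controls how strongly the ancillary tasks shape the latent space relative to the reconstruction objective, which is the lever the abstract describes as requiring only minor alterations to existing generative models.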

Item Type: Article
Subjects: EP Archives > Multidisciplinary
Depositing User: Managing Editor
Date Deposited: 30 Jun 2023 04:23
Last Modified: 06 Oct 2023 12:58
URI: http://research.send4journal.com/id/eprint/2461
