Dilated Convolution for Time Series Learning
Wang Zhang, Subhro Das, et al.
ICASSP 2025
Large language models, commonly known as LLMs, are showing promise in tackling some of the most complex tasks in AI. In this perspective, we review the wider field of foundation models, of which LLMs are a component, and their application to materials discovery. In addition to surveying the current state of the art, including applications to property prediction, synthesis planning, and molecular generation, we also look to the future and posit how new methods of data capture, and indeed new modalities of data, will influence the direction of this emerging field.