The relationship between art and music goes back at least as far as the depiction of instruments and musicians on ancient walls and vases. In more recent centuries, some artists, composers, and theorists have tried to define and explore the relationship between visual and musical concepts more abstractly.

Why should two seemingly separate creative domains be related to one another at all? It turns out that humans are generally able to form relationships between different sensory inputs, even when it is not clear how those relationships arise or what they are based on.

In the world of artificial intelligence, this insight has led to a long line of work in exploring multimodal machine learning. These works are built on the idea that, for machines to more successfully reason about and navigate the human world, models need to be able to process and interpret multimodal signals.

In this work, we are interested in exploring the relationship between art and music, and more broadly, are motivated by questions of cross-modal perception. We apply techniques from multimodal machine learning to a novel domain, paintings and classical music, in order to learn a shared representation between two different creative modalities. Our results demonstrate that such a representation can be achieved even with limited supervision.

Our embedding space is chronologically organized: works created close in time to one another lie close together in the space, regardless of their modality (paintings or music).
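The retrieval behavior such a space enables can be illustrated with a toy sketch. All vectors, dates, and the function name below are invented for illustration and do not come from the paper; the point is only that nearest-neighbor lookup across modalities in a chronologically organized space tends to return works from roughly the same period.

```python
import numpy as np

# Hypothetical shared embeddings: each (modality, year) work maps to a
# vector whose position loosely tracks its creation date. Values invented.
works = {
    ("painting", 1875): np.array([0.10, 0.20]),
    ("music",    1878): np.array([0.12, 0.22]),
    ("painting", 1950): np.array([0.85, 0.70]),
    ("music",    1948): np.array([0.83, 0.68]),
}

def nearest_cross_modal(query_key, works):
    """Return the closest work from the *other* modality (Euclidean distance)."""
    modality, _ = query_key
    q = works[query_key]
    candidates = [(k, np.linalg.norm(q - v))
                  for k, v in works.items() if k[0] != modality]
    return min(candidates, key=lambda kv: kv[1])[0]

# A painting from 1875 retrieves a musical work from a nearby year.
print(nearest_cross_modal(("painting", 1875), works))  # → ('music', 1878)
```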

We hypothesize that future work can improve upon such a representation and use it to propose relationships between works from these two domains. Doing so could provide valuable insights into the shared culture from which two works emerge, or into the basis of cross-modal perception.
