Next Bar Predictor: An Architecture in Automated Music Generation

Document Type

Conference Proceeding

Publication Date

2020

Abstract

Music generation has been an active field of research in computer science and is considered a creative task that attempts to imitate human creativity. Among the different approaches to generating musical content, recent works have focused on generative adversarial networks. One of these is MidiNet, which serves as the baseline model in this study. In this paper, we propose the Next Bar Predictor, a generative model that creates a melody one bar at a time, using the previous bar as the basis for generating aesthetically pleasing melodies. We explore several variants of this architecture by experimenting with different regression and classification models, namely Decision Trees (DT), K-Nearest Neighbors (KNN), and Multilayer Perceptrons (MLP). The models were trained on a dataset from Theorytab consisting of 460 songs. The outputs of these variant models were then compared against those of MidiNet using both a machine-based objective scoring mechanism and human-based subjective evaluations. The dissimilarity scores obtained by our KNN (0.65) and DT (0.74) models, computed against the melodies in the dataset, are sufficiently high and indicate that both models are generally creative. Furthermore, based on the evaluation by human listeners, the melodies generated by our DT models are more realistic and pleasing than those of MidiNet. Casual listeners also find the DT model's melodies more interesting, although professional listeners think otherwise. Finally, all the variant models require much less training time and computational power than MidiNet. The proposed Next Bar Predictor is therefore a viable alternative for automated music generation.
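
To illustrate the next-bar scheme described above, here is a minimal Python sketch of one variant, assuming bars are encoded as flattened piano-roll vectors and using a KNN regressor; the bar encoding, model settings, and helper names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of next-bar prediction (hypothetical encoding and settings).
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Assume each bar is a 16-step x 128-pitch piano roll flattened to a vector.
STEPS, PITCHES = 16, 128
BAR_DIM = STEPS * PITCHES

def make_training_pairs(songs):
    """Pair each bar with its successor: X holds bar t, y holds bar t+1."""
    X, y = [], []
    for bars in songs:  # bars: array of shape (n_bars, BAR_DIM)
        X.extend(bars[:-1])
        y.extend(bars[1:])
    return np.asarray(X), np.asarray(y)

def generate(model, seed_bar, n_bars):
    """Autoregressively roll out a melody one bar at a time."""
    melody, bar = [seed_bar], seed_bar
    for _ in range(n_bars - 1):
        bar = model.predict(bar.reshape(1, -1))[0]
        bar = (bar > 0.5).astype(float)  # binarize the predicted piano roll
        melody.append(bar)
    return np.stack(melody)

# Toy random data stands in for the Theorytab dataset here.
rng = np.random.default_rng(0)
songs = [rng.integers(0, 2, size=(8, BAR_DIM)).astype(float) for _ in range(20)]
X, y = make_training_pairs(songs)

knn = KNeighborsRegressor(n_neighbors=3).fit(X, y)
melody = generate(knn, seed_bar=songs[0][0], n_bars=8)
print(melody.shape)  # (8, 2048): eight bars generated from one seed bar
```

The DT and MLP variants would follow the same bar-to-bar pattern, swapping in DecisionTreeRegressor or MLPRegressor; since each model only maps one bar vector to the next, training cost stays far below that of an adversarially trained network, consistent with the efficiency claim in the abstract.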
