The following is a non-exhaustive list of content relevant to the ISMIR community, submitted by its members. Keep in mind that it is provided in good faith and is the result of user submissions. Therefore, if something should (or shouldn't) be here, let us know.

Research Centers

Group / Description Location
Music and Audio Research Laboratory at New York University
Doctoral and master's programs in music technology in the heart of New York City. Main research areas include MIR, Immersive Audio, Music Cognition, and Interactive Systems.
MIR Group at the University of Coimbra PT
Music Information Retrieval Lab at Vienna University of Technology AT
Music Technology at Georgia Tech US
Music Informatics Research Group at City University London UK
Research lab on computational music analysis, focusing on large-scale analysis of scores.
Department of Computational Perception of the Johannes Kepler University Linz AT
Centre for Digital Music (C4DM) at Queen Mary University of London
C4DM is a world-leading multidisciplinary research group in the field of music and audio technology. Research ranges from recording and playback equipment to the simulation and synthesis of instruments and voices, acoustic spaces, music understanding, and music delivery and retrieval. With a strong focus on making innovation usable, we are ideally placed to work with industry leaders in forging new business models for the music industry.
Audio Analysis Lab at Aalborg University DK
Special Interest Group on Music Analysis DE
Music Technology Group at UPF
The Music Technology Group (MTG) of the Universitat Pompeu Fabra in Barcelona specializes in sound and music computing. With more than 50 researchers, the MTG carries out research on topics such as audio signal processing, sound and music description, musical interfaces, sound and music communities, and performance modeling, among others.
Intelligent Music Processing Group at the Austrian Research Institute for Artificial Intelligence AT
Music and Entertainment Technology Laboratory (MET-lab) at Drexel
The Music and Entertainment Technology Laboratory (MET-lab) is devoted to research in digital media technologies that will shape the future of entertainment. MET-lab's primary research focus encompasses several areas: music information retrieval, music production technology, new musical interfaces, and musical humanoid robotics. The lab also emphasizes K-12 outreach and hosts Summer Music Technology, a one-week experience-based educational program for high school students.
Music Engineering at the University of Miami US
The Centre of Interdisciplinary Research in Music Media and Technology (CIRMMT) at McGill CA
Institut de Recherche et Coordination Acoustique / Musique (IRCAM) FR
International Audio Laboratories Erlangen
The Semantic Audio Processing group develops techniques and tools for analyzing, structuring, retrieving, navigating, and presenting music-related audio signals and other time-dependent multimedia data streams.
Application of Information and Communication Technologies Research Group ES

Conferences and Journals

International Conference on Acoustics, Speech, and Signal Processing (ICASSP)
International Computer Music Association
Sound and Music Computing (SMC)
Neural Information Processing Systems (NIPS)
IEEE / ACM Transactions on Audio, Speech, and Language Processing (TASLP)
Music Perception
Journal of New Music Research

Educational Materials

Description Link
William A. Sethares. "Tuning, Timbre, Spectrum, Scale." Springer, London, 1998. [link]
Anssi P. Klapuri and Manuel Davy, editors. "Signal Processing Methods for Music Transcription." Springer, New York, 2006. [link]
William A. Sethares. "Rhythm and Transforms." Springer, 2007. [link]
Meinard Müller. "Information Retrieval for Music and Motion." Springer, 2007. [link]
Òscar Celma. "Music Recommendation and Discovery: The Long Tail, Long Fail, and Long Play in the Digital Music Space." Springer, 2010. [link]
Alexander Lerch. "An Introduction to Audio Content Analysis." Wiley, 2012. [link]
Meinard Müller. "Fundamentals of Music Processing." Springer, 2015. [link]
Peter Knees and Markus Schedl. "Music Similarity and Retrieval: An Introduction to Audio- and Web-based Strategies." Springer, 2016. [link]
Claus Weihs, Dietmar Jannach, Igor Vatolkin, and Guenter Rudolph. "Music Data Analysis: Foundations and Applications." Chapman & Hall/CRC Computer Science & Data Analysis, 2016. [link]
Matlab code for feature extraction, pitch tracking, key detection, onset detection, and links to data sets and MIR-related software projects. [link]
A centralized collection of teaching resources related to Music Information Retrieval, addressed to teachers and students interested in these technologies from an educational point of view. Current resources include: a list of MIR-related courses at different levels, institutions, and countries; a small collaborative list of teaching materials, such as exercises, musical examples, and code; and a list of datasets and reference annotations. [link]
"This tutorial provides a survey of the field of Music Information Retrieval (MIR), that aims, among other things, at automatically extracting semantically meaningful information from various representations of music entities, such as audio, scores, lyrics, web pages or microblogs. The tutorial is designed for students, engineers, researchers, and data scientists who are new to MIR and want to get introduced to the field." [link]
A survey article on Music Information Retrieval. [link]
A book on the "Fundamentals of Music Processing". [link]
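
Several of the materials above (for instance the feature extraction and onset detection code) revolve around standard signal processing recipes. As a quick orientation for newcomers, here is a minimal Python/NumPy sketch of one such recipe, onset detection via half-wave-rectified spectral flux. It is an illustrative sketch only; the frame size, hop size, and peak-picking threshold are arbitrary choices and are not taken from any listed resource.

```python
import numpy as np

def spectral_flux_onsets(x, frame_len=1024, hop=512, threshold=0.1):
    """Detect onsets via half-wave-rectified spectral flux (a common baseline)."""
    n_frames = 1 + (len(x) - frame_len) // hop
    window = np.hanning(frame_len)
    # Magnitude spectrogram: one row per analysis frame
    S = np.abs(np.array([np.fft.rfft(window * x[i * hop:i * hop + frame_len])
                         for i in range(n_frames)]))
    # Sum the positive spectral differences between consecutive frames
    flux = np.maximum(S[1:] - S[:-1], 0.0).sum(axis=1)
    flux /= flux.max() + 1e-12  # normalize to [0, 1]
    # Simple peak picking: local maxima above the threshold
    peaks = [i for i in range(1, len(flux) - 1)
             if flux[i] > threshold and flux[i] >= flux[i - 1] and flux[i] > flux[i + 1]]
    return flux, peaks

# Toy signal: two clicks in silence should produce two onset peaks
sr = 8000
x = np.zeros(sr)
x[2000] = x[6000] = 1.0
flux, peaks = spectral_flux_onsets(x)
```

Real implementations, such as those in the toolboxes and textbooks listed here, add refinements like logarithmic compression, band-wise processing, and adaptive thresholding.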

Software Tools

Description Link
jMIR is an open-source software suite implemented in Java for use in music classification research. It can be used to study music in both audio and symbolic formats, as well as mine cultural information from the web and manage music collections. jMIR includes software for extracting features, applying machine learning algorithms, mining metadata and analyzing metadata. [link]
Essentia is an open-source C++ library for audio analysis and audio-based music information retrieval, released under the Affero GPLv3 license (also available under a proprietary license upon request) and developed by the Music Technology Group at Universitat Pompeu Fabra. Essentia won the Open-Source Software Competition at ACM Multimedia 2013. [link]
librosa is a Python package for music and audio analysis. [link]
Time-Scale Modification (TSM) Toolbox - MATLAB implementations of classical time-scale modification algorithms such as OLA, WSOLA, and the phase vocoder, as well as more recent approaches. [link]
Chroma Toolbox - MATLAB implementations for extracting various types of novel pitch-based and chroma-based audio features. [link]
Tempogram Toolbox - MATLAB implementations for extracting various types of recently proposed tempo- and pulse-related audio representations. [link]
Similarity Matrix (SM) Toolbox - MATLAB implementations for computing and enhancing similarity matrices in various ways. [link]
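
The Chroma Toolbox and SM Toolbox entries above describe an analysis chain whose core idea fits in a few lines: compare normalized feature vectors for all pairs of frames. As an illustration only (the toolboxes implement far more refined and enhanced variants), here is a basic cosine self-similarity matrix in Python/NumPy over a toy chroma-like sequence:

```python
import numpy as np

def self_similarity(F, eps=1e-9):
    """Cosine self-similarity matrix of a feature sequence.

    F has shape (d, n): one d-dimensional feature vector (e.g. a
    12-dimensional chroma vector) per column/frame.
    """
    Fn = F / (np.linalg.norm(F, axis=0, keepdims=True) + eps)
    return Fn.T @ Fn  # entry (i, j) = similarity of frames i and j

# Toy 12-dimensional "chroma" sequence with an A-B-A repetition structure
rng = np.random.default_rng(0)
a, b = rng.random((12, 1)), rng.random((12, 1))
F = np.hstack([np.tile(a, (1, 4)), np.tile(b, (1, 4)), np.tile(a, (1, 4))])
S = self_similarity(F)
# The repeated A sections appear as off-diagonal blocks of high similarity
```

Structure analysis methods then search such matrices for diagonal stripes (repetitions) and blocks (homogeneous sections).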

Datasets

Description Link
Primary resource for the Million Song Dataset [link]
Various datasets hosted by the MTG [link]
Alternative resource for the Million Song Dataset: Feature Sets and Benchmark splits. [link]
Home page of the McGill Billboard annotations, containing time-aligned chord annotations and structural analyses of a randomised sample of over 1000 songs from the American Billboard Hot 100 charts between 1958 and 1991. [link]
A dataset of MusicXML excerpts and corresponding Schenkerian analyses in a computer-readable format. [link]
Reference data for computational music analysis. Now contains a dataset of ground truth structures for fugues. [link]
The "Million Musical Tweets Dataset" (MMTD) contains listening histories inferred from microblogs. Each listening event, identified via twitter-id and user-id, is annotated with temporal (date, time, weekday, timezone), spatial (longitude, latitude, continent, country, county, state, city), and contextual (information on the country) information. In addition, pointers to the artist and track are provided. [link]
The "MusicMicro 11.11-09.12" dataset contains listening histories inferred from microblogs. Each listening event, identified via twitter-id and user-id, is annotated with temporal (month and weekday) and spatial (longitude, latitude, country, and city) information. In addition, pointers to the artist and track are provided. [link]
The MusiClef 2012 - Multimodal Music Data Set provides editorial metadata, various audio features, user tags, web pages, and expert labels on a set of 1355 popular songs. It was used in the MusiClef 2012 Evaluation Campaign. [link]
Index of contents of the GTZAN dataset. [link]
List of replicas found in the Latin Music Database. [link]
List of replicas found in the Ballroom dataset. [link]
Saarland Music Data (SMD) - SMD supplies free music recordings of Western classical music (SMD Western Music) as well as MIDI-audio pairs (SMD MIDI-Audio Piano Music), which have been generated by using hybrid acoustic / digital pianos (Disklavier). [link]
Music Synchronization for RWC Music Database (Classical Music) - Website providing synchronized MIDI-audio pairs obtained with an automated music synchronization procedure. [link]
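
The synchronized MIDI-audio pairs in the last two entries are produced by alignment techniques commonly formulated as dynamic time warping (DTW) over a pairwise feature cost matrix. The following Python/NumPy sketch shows plain DTW with backtracking; it illustrates the general idea and is not the actual procedure used to create those datasets.

```python
import numpy as np

def dtw(cost):
    """Dynamic time warping over a pairwise cost matrix.

    Returns the accumulated cost matrix and one optimal alignment path
    (a list of (i, j) index pairs).
    """
    n, m = cost.shape
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = cost[i - 1, j - 1] + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack from the end to the start along locally optimal predecessors
    path, i, j = [], n, m
    while (i, j) != (1, 1):
        path.append((i - 1, j - 1))
        steps = {(i - 1, j): D[i - 1, j], (i, j - 1): D[i, j - 1],
                 (i - 1, j - 1): D[i - 1, j - 1]}
        i, j = min(steps, key=steps.get)
    path.append((0, 0))
    return D[1:, 1:], path[::-1]

# Align two short one-dimensional "feature" sequences; cost = absolute difference
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.0, 0.0, 1.0, 2.0, 2.0, 3.0])
D, path = dtw(np.abs(x[:, None] - y[None, :]))
```

Production synchronization pipelines typically operate on chroma-like features rather than raw samples and use multiscale DTW variants for efficiency.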