Image by Paweł Czerwiński

Brain rhythms readily synchronize with auditory rhythms, like those in music and speech. Although brain–environment synchrony is associated with positive behavioral outcomes – we better understand and remember material when brain–environment synchrony is tighter – we’re missing an understanding of why one person might succeed in a listening situation while another might fail. The Research Group Neural and Environmental Rhythms takes a dynamical-systems approach to understanding brain–environment synchrony, conceptualizing and modeling brain rhythms as being generated by neural oscillators (and testing that assumption along the way). We combine individual-differences, experimental, and lifespan approaches (currently focused on the later half of life), and make use of psychophysics, electrophysiology (M/EEG), and modeling to move toward a more holistic, mechanistic understanding of brain–environment synchrony and its role in auditory perception.


Reliability of neural entrainment of the human auditory system

project leader: Yuranny Cabral-Calderin

Image by FPVmat A

Auditory stimuli are often rhythmic in nature. Brain activity synchronizes with auditory rhythms via neural entrainment, and entrainment seems to be beneficial for auditory perception. However, it is not clear to what extent neural entrainment in the auditory system is reliable over time – a prerequisite for targeted intervention. The current work established the reliability of neural entrainment over time and predicted individual differences in auditory perception from the associated neural activity. Across two sessions, listeners detected silent gaps presented at different phase locations of a 2-Hz frequency-modulated (FM) noise while EEG activity was recorded. Our results demonstrate that neural entrainment in the auditory system and the resulting behavioral modulation are reliable over time, and that both entrained delta and non-entrained alpha oscillatory activity contribute to near-threshold stimulus perception.
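Phase-dependent behavioral modulation of this kind is commonly quantified by fitting a cosine to detection performance binned by FM phase. Below is a minimal, stdlib-only sketch on synthetic data; the bin count, hit rates, and "best" phase are invented for illustration and are not taken from the study.

```python
import math

def fit_phase_modulation(phases, hit_rates):
    """Fit hit_rate ~ mean + amp * cos(phase - best_phase) using first
    Fourier coefficients (exact for equally spaced phase bins)."""
    n = len(phases)
    mean = sum(hit_rates) / n
    b = 2 / n * sum(y * math.cos(p) for p, y in zip(phases, hit_rates))
    c = 2 / n * sum(y * math.sin(p) for p, y in zip(phases, hit_rates))
    return mean, math.hypot(b, c), math.atan2(c, b)

# Synthetic data: gap detection is best near FM phase 1.0 rad
phases = [2 * math.pi * k / 8 for k in range(8)]
hit_rates = [0.5 + 0.2 * math.cos(p - 1.0) for p in phases]
mean, amp, best_phase = fit_phase_modulation(phases, hit_rates)
```

The modulation amplitude `amp` indexes how strongly perception is entrained, and `best_phase` gives the FM phase at which detection is best.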

Keywords – Entrainment, reliability, auditory perception, gap detection, EEG, oscillations, tACS, fMRI


Neural entrainment to natural music

project leader: Kristin Weineck

Image by Eric Nopanen

Neural activity in the auditory system synchronizes to sound rhythms, where sound rhythms are often operationalized in terms of the sound’s amplitude envelope. We hypothesized that – especially for music – the envelope might not best capture the complex spectro-temporal fluctuations that give rise to beat perception and synchronize neural activity. This study investigated 1) neural entrainment to different musical features, 2) tempo-dependence of neural entrainment, and 3) dependence of entrainment on familiarity, enjoyment, and ease of beat perception. In this EEG study, participants listened to tempo-modulated music (1–4 Hz). Converging analysis approaches showed that the spectral flux of music – as opposed to the amplitude envelope – evoked the strongest neural entrainment. Moreover, music with slower beat rates, high familiarity, and easy-to-perceive beats elicited the strongest neural responses.
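Spectral flux measures how much the magnitude spectrum increases from one short frame to the next, which is why it captures note onsets that the amplitude envelope can miss. Here is a stdlib-only sketch on a toy signal; the frame size, frequencies, and naive DFT are illustrative choices, not the study's analysis pipeline.

```python
import math

def dft_magnitudes(frame):
    """Magnitude spectrum of one frame via a naive DFT."""
    n = len(frame)
    mags = []
    for k in range(n // 2):
        re = sum(x * math.cos(2 * math.pi * k * t / n) for t, x in enumerate(frame))
        im = sum(x * math.sin(2 * math.pi * k * t / n) for t, x in enumerate(frame))
        mags.append(math.hypot(re, im))
    return mags

def spectral_flux(signal, frame_len):
    """Half-wave-rectified frame-to-frame increase in spectral magnitude."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, frame_len)]
    spectra = [dft_magnitudes(f) for f in frames]
    return [sum(max(c - p, 0.0) for p, c in zip(prev, cur))
            for prev, cur in zip(spectra, spectra[1:])]

# Toy signal: a tone that gains an extra partial halfway through
frame_len = 64
signal = ([math.sin(2 * math.pi * 4 * t / 64) for t in range(128)]
          + [math.sin(2 * math.pi * 4 * t / 64)
             + math.sin(2 * math.pi * 12 * t / 64) for t in range(128)])
flux = spectral_flux(signal, frame_len)
```

The flux trace peaks at the frame where the new partial enters, even though the amplitude envelope barely changes there; this is the intuition behind spectral flux outperforming the envelope.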

Keywords – Neural entrainment, tempo, EEG, music perception, music features


Tap to the beat (the time is now)

project leader: Ece Kaya

Image by Alexey Ruban

This study aims to explore individual differences in our ability to synchronize our body movements with auditory rhythms (“rhythmic entrainment”). We work from the assumption that motor synchronization is accomplished by entrainment of an oscillatory mechanism, and we focus on two properties of this internal oscillator: preferred rate (sometimes referred to as preferred tempo) and flexibility. The experiment is a synchronization–continuation tapping paradigm with a twist: Over the course of 400 trials, participants tap to 400 different tempi, and the trial-to-trial differences in tempo are maximized so that participants’ ability to flexibly adjust is pushed to the limit. We estimated preferred rate by quantifying the rate(s) at which each participant matched the stimulus rate with minimum tempo-matching error. We estimated flexibility by quantifying the degree to which tempo-matching errors depended on trial-by-trial rate differences. The novel analysis methods we developed revealed reliable estimates of preferred rate and flexibility for participants who completed two identical sessions.
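The two oscillator properties can be illustrated with toy estimators: preferred rate as the stimulus rate matched with minimum error, and flexibility as (the inverse of) how strongly matching errors grow with trial-to-trial rate changes. This is a sketch under simple assumptions, not the group's actual analysis methods.

```python
def preferred_rate_and_flexibility(stim_rates, produced_rates):
    """Toy estimators (illustrative, not the study's pipeline):
    - preferred rate: the stimulus rate matched with minimum error
    - slope: how much |matching error| grows with the trial-to-trial
      change in stimulus rate (a steeper slope = less flexible)."""
    errors = [abs(p - s) for s, p in zip(stim_rates, produced_rates)]
    preferred = stim_rates[errors.index(min(errors))]
    deltas = [abs(b - a) for a, b in zip(stim_rates, stim_rates[1:])]
    errs = errors[1:]
    n = len(deltas)
    mx, my = sum(deltas) / n, sum(errs) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(deltas, errs))
             / sum((x - mx) ** 2 for x in deltas))
    return preferred, slope

# Synthetic participant: errors grow with rate jumps and are smallest near 2 Hz
stim = [1.0, 3.0, 1.5, 2.5, 2.0, 1.2, 2.8]
jumps = [0.0] + [abs(b - a) for a, b in zip(stim, stim[1:])]
produced = [s + 0.02 * d + 0.1 * abs(s - 2.0) for s, d in zip(stim, jumps)]
preferred, slope = preferred_rate_and_flexibility(stim, produced)
```

For this synthetic participant the estimator recovers a preferred rate of 2 Hz and a positive error-versus-jump slope, i.e., limited flexibility.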


Rhythm production (Tap2Card)

project leader: Olivia Wen

Image by Louise Patterton

This study is designed to measure individuals' internal preferred tempo (also referred to as preferred rate) – the tempo at which individuals prefer to produce, listen to, and perceive auditory stimuli. We designed a novel rhythm task in which individuals produce rhythms from a schematic, pictorial representation. No auditory stimuli are presented at any point during the experiment, so there are no cues to the tempo (rate) at which the rhythm should be produced. Instead, we measure the spontaneous tempo/rate at which each participant naturally produces the rhythms. We examine a number of contributors to the quality and tempo of the rhythm productions, for example: meter (duple vs. triple), rhythmic complexity (measured using a model-based approach), amount of practice on the task, and musical experience and skill.


Neural entrainment to rhythmically irregular sounds

project leaders: Lea Kërçiku, Yuranny Cabral-Calderin, Vera Komeyer

Here, we explore under what circumstances neural entrainment breaks down and how this differs between individuals, with the goal of better understanding listening success in suboptimal environments. We measure EEG while participants detect gaps embedded in frequency-modulated sounds, where the degree of regularity conveyed by the frequency modulation is parametrically varied. Thus, we challenge the brain's ability to entrain to rhythms with less and less rhythmicity. This project aims not only to assess individual differences in the degree to which entrainment breaks down with decreasing temporal regularity, but also to reveal neural "compensatory" mechanisms that support listening when a "rhythmic mode" is unavailable.
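One simple way to parametrically degrade the regularity of a frequency modulation is to jitter the duration of each modulation cycle. The sketch below uses invented parameters; the study's actual stimulus construction may differ.

```python
import random

def fm_cycle_durations(n_cycles, base_period, jitter, rng):
    """Durations of successive FM cycles. jitter=0 gives a perfectly
    periodic modulator; larger jitter degrades temporal regularity.
    (Illustrative only; hypothetical parameters.)"""
    return [base_period * (1 + rng.uniform(-jitter, jitter))
            for _ in range(n_cycles)]

rng = random.Random(0)
regular = fm_cycle_durations(100, 0.5, 0.0, rng)    # 2-Hz, fully regular
irregular = fm_cycle_durations(100, 0.5, 0.4, rng)  # strongly jittered
```

Sweeping the jitter parameter from 0 upward yields a family of stimuli that moves smoothly from fully rhythmic to nearly arrhythmic.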

Keywords – Entrainment, irregularity, flexibility, EEG, gap detection, auditory perception, oscillations

Image by Steve Johnson

Preferred rate is category specific

project leader: Olivia Wen

Image by Pawel Czerwinski

Each person has a preferred tempo at which they feel most comfortable listening to and interacting with rhythms. To measure this, we asked participants to continuously adjust the tempo of many sound stimuli using a slider until they arrived at the tempo that was most comfortable for them. The sounds spanned four categories: 1) artificial stimuli: metronome and amplitude modulation; 2) natural sounds, such as a dog drinking water, brushing teeth, and walking on leaves; 3) music: instrumental excerpts from the genres Rock, Jazz, Pop, Techno, Latin, Country, and Hip Hop; and 4) speech: English, German, and Italian spoken passages. Interestingly, we find that an individual's preferred tempo is category specific and does not generalize across categories.


An exhaustive search for 12-unit metrically ambiguous rhythms

project leader: Matt Moore

Image by Mihály Köles

An ambiguous rhythm is any rhythm where the musical beat can be felt in multiple contradictory ways. We exhaustively searched a metrical space to find rhythms that are perceived to be maximally ambiguous. First, we generated every possible unique rhythm within a 12-unit meter; we presented them to participants at two tempi in an online experiment and asked them to tap the beat on their keyboards. We identified several rhythms displaying ambiguous characteristics, which we classified into two types: “within-participant” ambiguity, where a single participant switches metrical interpretations between two presentations of the same rhythm, and “between-participant” ambiguity, where half of the participants hear one meter while the other half hear another. Using these validated ambiguous rhythms, we plan to develop better methods to induce metrical switching, with the ultimate goal of measuring neural responses pre- and post-switch.
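The enumeration step can be sketched in a few lines. Since the study's exact uniqueness criterion isn't spelled out here, this sketch assumes rhythms are equivalent up to rotation (a shift of the starting point):

```python
def all_rhythms(n=12):
    """Every onset pattern on an n-unit grid, as 0/1 tuples."""
    return [tuple((i >> k) & 1 for k in range(n)) for i in range(2 ** n)]

def canonical_rotation(pattern):
    """Lexicographically smallest rotation: one representative per set of
    rhythms that are identical up to a shift of the starting point."""
    n = len(pattern)
    return min(tuple(pattern[(k + r) % n] for k in range(n)) for r in range(n))

rhythms = all_rhythms(12)
unique_up_to_rotation = {canonical_rotation(p) for p in rhythms}
```

There are 4096 raw 12-unit onset patterns, which collapse to 352 rotation-equivalence classes; any further filtering (e.g., requiring a minimum number of onsets) would shrink the candidate set before presentation.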


EEG-based decoding of auditory attention using convolutional neural networks

project leader: Keyvan Mahjoory

Image by Alina Grubnyak

In a multi-speaker scenario, human listeners are able to attend to one particular speaker of interest and ignore others. Previous studies have shown that the spatial location of the attended speaker can be decoded with 70–80% accuracy based on electroencephalography (EEG) recordings. However, for real-world applications, hearing aids for example, finding the minimal EEG setup that achieves similar decoding performance would be of great interest. In this study, we used publicly available EEG data from participants attending to one of two spatially separated (left and right) speech audio streams and ignoring the other. We trained a convolutional neural network on broadband (1–30 Hz) EEG time series, and tested the model on subsets of EEG channels and frequency bands. We took a data-driven approach, training the model on EEG time series rather than pre-selected features, and showed that alpha lateralization is the main predictor of spatial attention. In addition, our minimal-setup recommendation could be beneficial for hearing-aid applications coupled to “wearable” EEG.
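The network itself requires a deep-learning framework, but the feature it converged on, alpha-power lateralization, can be sketched with the standard library alone. Channel labels, sampling rate, and amplitudes below are invented for illustration.

```python
import math

def band_power(samples, sr, freq):
    """Power at one frequency via a single-bin DFT."""
    n = len(samples)
    re = sum(x * math.cos(2 * math.pi * freq * t / sr) for t, x in enumerate(samples))
    im = sum(x * math.sin(2 * math.pi * freq * t / sr) for t, x in enumerate(samples))
    return (re * re + im * im) / n

def alpha_lateralization(left, right, sr, alpha_freq=10.0):
    """(right - left) / (right + left) alpha power: the sign indicates
    which side carries more alpha, a proxy for the attended location."""
    pl = band_power(left, sr, alpha_freq)
    pr = band_power(right, sr, alpha_freq)
    return (pr - pl) / (pr + pl)

# Toy channels: three times the 10-Hz alpha amplitude on the right channel
sr, n = 100, 200
left = [0.5 * math.sin(2 * math.pi * 10 * t / sr) for t in range(n)]
right = [1.5 * math.sin(2 * math.pi * 10 * t / sr) for t in range(n)]
idx = alpha_lateralization(left, right, sr)
```

A positive index means more alpha power on the right; because alpha increases over the hemisphere ignoring the stimulus, the sign of such an index can be thresholded to classify the attended side, which is why a very small channel set can still decode spatial attention.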


An online slider paradigm for temporal extrapolation

project leader: Olivia Wen

Image by Christopher Burns

The psychophysical task of mentally extrapolating a rhythm through a silent gap can be used to estimate oscillator flexibility: a flexible oscillator will maintain the stimulus dynamics through the silent gap, while an inflexible oscillator will decay quickly back to its own endogenous dynamics that might not match the stimulus. Thus, analyzing timing-judgment errors will allow us to estimate individuals’ oscillator flexibility. In a series of experiments, participants listened to a sequence of clicks, extrapolated through a silent gap, and rated the timing of the final click on a continuous slider. Across these experiments, we have validated the versatility and sensitivity of the online slider paradigm to improve the efficiency and resolution of the extrapolation task over and above laborious in-lab approaches using binary-choice paradigms.
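The oscillator-flexibility logic can be captured by a toy relaxation model: during the gap, the oscillator's frequency decays from the stimulus rate back toward its preferred rate, and a larger time constant (more flexibility) yields a smaller timing error. This model, including the relaxation form and all parameter values, is entirely illustrative and is not the study's model.

```python
import math

def extrapolated_click_time(f_stim, f_pref, tau, n_cycles, dt=0.001):
    """Time at which the oscillator has accumulated n_cycles of phase
    during the silent gap, given that its frequency relaxes from the
    stimulus rate f_stim back toward its preferred rate f_pref with
    time constant tau (larger tau = slower decay = more flexible)."""
    phase, t = 0.0, 0.0
    while phase < n_cycles:
        f = f_pref + (f_stim - f_pref) * math.exp(-t / tau)
        phase += f * dt
        t += dt
    return t

f_stim, f_pref, n_cycles = 2.5, 2.0, 4       # stimulus faster than preferred rate
true_time = n_cycles / f_stim                # when the final click "should" occur
err_flexible = abs(extrapolated_click_time(f_stim, f_pref, 10.0, n_cycles) - true_time)
err_inflexible = abs(extrapolated_click_time(f_stim, f_pref, 0.2, n_cycles) - true_time)
```

In this sketch the inflexible oscillator (small tau) drifts back to its preferred rate during the gap and mistimes the final click, which is the pattern of timing-judgment errors the slider paradigm is designed to detect.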