%matplotlib inline
import numpy, scipy, matplotlib.pyplot as plt, IPython.display as ipd
import librosa, librosa.display
import stanford_mir; stanford_mir.init()
from ipywidgets import interact
Load an audio file:
x, sr = librosa.load('audio/58bpm.wav')
ipd.Audio(x, rate=sr)
Use librosa.beat.beat_track to estimate the beat locations and the global tempo:
tempo, beat_times = librosa.beat.beat_track(y=x, sr=sr, start_bpm=60, units='time')
print(tempo)
print(beat_times)
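Because we passed units='time', the beat locations come back in seconds. To slice the signal x at the beats, the times must be converted to sample indices. A minimal sketch of that conversion, using hypothetical beat times and librosa.load's default sample rate of 22050 Hz:

```python
import numpy as np

sr = 22050                                  # librosa.load's default sample rate
beat_times = np.array([0.52, 1.55, 2.58])   # hypothetical beat_track output, in seconds

# Convert beat times (seconds) to sample indices, e.g. for slicing x:
beat_samples = (beat_times * sr).astype(int)
```

librosa also provides time/sample/frame conversion helpers; the arithmetic above is what they do in the simplest case.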
Plot the beat locations over the waveform:
plt.figure(figsize=(14, 5))
librosa.display.waveshow(x, sr=sr, alpha=0.6)
plt.vlines(beat_times, -1, 1, color='r')
plt.ylim(-1, 1)
Plot a histogram of the intervals between adjacent beats:
beat_times_diff = numpy.diff(beat_times)
plt.figure(figsize=(14, 5))
plt.hist(beat_times_diff, bins=50, range=(0,4))
plt.xlabel('Beat Length (seconds)')
plt.ylabel('Count')
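If the beat tracker is locked onto a steady pulse, the histogram is concentrated around a single inter-beat interval, and the reciprocal of that interval recovers the tempo. A sketch of this relationship, assuming a synthetic, perfectly steady 58 BPM pulse rather than the estimated beats above:

```python
import numpy as np

# Hypothetical beat times for a perfectly steady 58 BPM pulse.
beat_times = np.arange(0, 20, 60 / 58)

# The median inter-beat interval is robust to a few mis-detected beats;
# 60 divided by that interval gives a tempo estimate in BPM.
ibi = np.diff(beat_times)
tempo_estimate = 60.0 / np.median(ibi)
```

A histogram with two clusters, one near twice the other, often indicates octave errors, i.e. the tracker alternating between the beat and its subdivision.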
Visually, it's difficult to tell how correct the estimated beats are. Let's listen to a click track:
clicks = librosa.clicks(times=beat_times, sr=sr, length=len(x))
ipd.Audio(x + clicks, rate=sr)
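Under the hood, a click track is just a short burst of sound placed at each beat time. A sketch of that synthesis with plain NumPy, using hypothetical beat times (librosa.clicks is more configurable, but the idea is the same):

```python
import numpy as np

sr = 22050                                   # sample rate assumed from librosa.load's default
duration = 4.0                               # seconds of output
beat_times = np.arange(0.5, duration, 0.5)   # hypothetical beats every 0.5 s

# One click: a 50 ms, 1 kHz sine burst with a fast exponential decay.
click_dur = 0.05
t = np.arange(int(click_dur * sr)) / sr
click = np.sin(2 * np.pi * 1000 * t) * np.exp(-t / 0.01)

# Add a copy of the click at each beat's sample position.
clicks = np.zeros(int(duration * sr))
for bt in beat_times:
    start = int(bt * sr)
    end = min(start + len(click), len(clicks))
    clicks[start:end] += click[:end - start]
```

Mixing this signal with the original (x + clicks) makes timing errors immediately audible, which is often a more reliable check than inspecting the plot.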
Use the IPython interactive widgets to observe how the output changes as we vary the parameters of the beat tracker.
def f(start_bpm, tightness_exp):
    return librosa.beat.beat_track(y=x, sr=sr, start_bpm=start_bpm, tightness=10**tightness_exp, units='time')
interact(f, start_bpm=60, tightness_exp=2)
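Outside a notebook, the same exploration can be done as a non-interactive parameter sweep over a grid of start_bpm and tightness values. A sketch, where run_tracker is a hypothetical stand-in for the librosa.beat.beat_track call in f above (so this cell runs without audio loaded):

```python
# Placeholder for (tempo, beat_times) = librosa.beat.beat_track(...);
# in practice, call the real tracker here.
def run_tracker(start_bpm, tightness):
    return start_bpm, tightness

# Sweep the same grid of parameters the widget exposes, collecting results
# keyed by (start_bpm, tightness_exp) for later comparison.
results = {}
for start_bpm in (30, 60, 120, 240):
    for tightness_exp in (1, 2, 3):
        results[(start_bpm, tightness_exp)] = run_tracker(start_bpm, 10**tightness_exp)
```

Comparing the estimated tempo across the grid reveals how sensitive the tracker is to its prior: a low tightness lets the beat period wander, while a high start_bpm can pull the estimate to a tempo octave.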
Try other audio files:
ls audio