Dynamic Time Warping (DTW) is an algorithm used to match two speech sequences that contain the same content but may differ in the duration of certain parts of the speech (phones, for example). Here, we will not use phones as the basic unit but frames, obtained as MFCC features extracted with a sliding window. We will use 12 MFCC features per frame, excluding energy.
MFCC Feature Extraction
We will obtain the MFCC features using the HTK toolkit instead of implementing the extraction ourselves. If you look closely, the audio file used here has a different format from the ones in the previous posts. Previously, we were using an audio file in a non-standard ("alien") format; here, we use an audio file with a fixed header format, RIFF (WAV).
The changes made are as follows.

Before:

    SOURCEFORMAT = ALIEN     # non-standard file format
    HEADERSIZE = 1024        # header length in bytes
    SOURCERATE = 625.0       # sample period, unit: 100 ns (i.e. 16 kHz)

After:

    SOURCEFORMAT = WAV       # RIFF file header
If you're curious, the new header size is 44 bytes, and the sample rate is defined in the header itself.
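For reference, a minimal HCopy configuration along these lines would produce the 12 cepstral coefficients without energy. The parameter names are standard HTK, but the window and frame-shift values below are assumptions, not taken from the original setup:

```
# Sketch of an HCopy configuration (values are assumptions)
SOURCEFORMAT = WAV       # RIFF file header
TARGETKIND   = MFCC      # plain MFCC: no energy, no C0
NUMCEPS      = 12        # 12 cepstral coefficients per frame
WINDOWSIZE   = 250000.0  # 25 ms analysis window (unit: 100 ns)
TARGETRATE   = 100000.0  # 10 ms frame shift (unit: 100 ns)
```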
First, we need to clarify that there is a difference between DTW and Viterbi. As the linked reference explains, the Viterbi algorithm is a pattern-matching algorithm based on statistical probability, while DTW is a pattern-matching algorithm based on template matching. The algorithm we use here is DTW, as no probability is involved.
We'll implement two methods that differ in their path restriction. In the first method, every step is restricted to a move of (0,+1), (+1,0), or (+1,+1) from the current point, as shown in the diagram below:
We'll show the results of the DTW algorithm one by one. The template and test utterances used here are the phrase 交通大學. We start by matching the template with itself to show that the algorithm works as expected (a straight diagonal line should appear):
We then compare the template against the same phrase, but the test data used here is a faster version of the template:
It can be observed that the curve grows horizontally very quickly. We will compare this result with the same speech spoken at a slower pace, shown below:
It can be observed that this path grows vertically faster than the one above (note the vertical axis).
Now, we'll show the results obtained when comparing against different speech sequences. (Only the test data changes, and all test utterances are spoken at the same pace.)
Observe how the extra final word shows up in the DTW graph.
Observe how poor the result is for an entirely different word.
A comparison with different speech sequences is shown below (medium-pace 交通大學 as the template):
For each pace, the path with the lowest cost is shown in italics.
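The first method can be sketched as a simple dynamic program. The following is a minimal Python sketch (not the original implementation), assuming each utterance is a NumPy array of shape (n_frames, 12) holding the MFCC features, with Euclidean distance as the frame-level cost:

```python
import numpy as np

def dtw(template, test, steps=((0, 1), (1, 0), (1, 1))):
    """DTW between two frame sequences of shape (n_frames, n_features).

    Returns the total accumulated cost and the warping path as a list
    of (template_index, test_index) pairs.
    """
    n, m = len(template), len(test)
    # Pairwise Euclidean distances between all frame pairs.
    dist = np.linalg.norm(template[:, None, :] - test[None, :, :], axis=2)
    D = np.full((n, m), np.inf)      # accumulated cost
    back = {}                        # backpointers for path recovery
    D[0, 0] = dist[0, 0]
    for i in range(n):
        for j in range(m):
            if i == 0 and j == 0:
                continue
            best, best_prev = np.inf, None
            for di, dj in steps:     # allowed moves: (0,+1), (+1,0), (+1,+1)
                pi, pj = i - di, j - dj
                if pi >= 0 and pj >= 0 and D[pi, pj] < best:
                    best, best_prev = D[pi, pj], (pi, pj)
            if best_prev is not None:
                D[i, j] = best + dist[i, j]
                back[(i, j)] = best_prev
    # Backtrack from the end point to (0, 0).
    path = [(n - 1, m - 1)]
    while path[-1] != (0, 0):
        path.append(back[path[-1]])
    return D[n - 1, m - 1], path[::-1]

# Matching a sequence with itself should give zero cost and a diagonal path.
template = np.array([[0.0], [1.0], [2.0], [3.0]])
cost, path = dtw(template, template)
```

Matching the template with itself, as in the first plot above, yields a perfectly diagonal path; a stretched test utterance bends the path horizontally or vertically instead.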
Next, we experiment with a looser constraint on the steps that may be taken from the current node: (0,+2), (0,+1), (+1,+1), (+1,0), and (+2,0). An illustration is shown below:
I'll only show some of the optimal paths for this method, as they are similar to the paths shown above.
Faster version of the template
Slower version of the template
Same pace 交通大隊贊
The most obvious difference can be seen in the DTW result for 交通大學贊: there is some difference at the "tail" of the graph.
It can be observed that the overall cost is lower than with the tighter constraint, as expected: the looser step set allows every path the tight one allows, plus more.
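The looser step set drops into the same recurrence; only the list of allowed moves changes. Below is a cost-only sketch under the same assumptions as before (frames as (n_frames, 12) NumPy arrays, Euclidean frame distance). Since the loose set is a superset of the tight one, its optimal cost can never be higher:

```python
import numpy as np

TIGHT = ((0, 1), (1, 0), (1, 1))
LOOSE = ((0, 2), (0, 1), (1, 1), (1, 0), (2, 0))

def dtw_cost(a, b, steps):
    """Minimum accumulated frame distance under the given step set."""
    dist = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    D = np.full(dist.shape, np.inf)
    D[0, 0] = dist[0, 0]
    for i in range(dist.shape[0]):
        for j in range(dist.shape[1]):
            for di, dj in steps:
                if i - di >= 0 and j - dj >= 0:
                    D[i, j] = min(D[i, j], D[i - di, j - dj] + dist[i, j])
    return D[-1, -1]

# Random stand-ins for two utterances' 12-dimensional MFCC frames.
rng = np.random.default_rng(0)
a = rng.normal(size=(20, 12))
b = rng.normal(size=(30, 12))
c_tight = dtw_cost(a, b, TIGHT)
c_loose = dtw_cost(a, b, LOOSE)
```

On any pair of inputs, `c_loose <= c_tight`, which matches the observation above.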
Now we clip our templates and compare them with the given test data. The templates to be clipped are "交通大學贊" and "交通大隊爛"; we expect the clip to remove the last word of each sentence, yielding "交通大學" and "交通大隊" respectively. We set the clip point at 0.7 of the template length.
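Clipping itself is just truncating the frame sequence before running DTW. A sketch, assuming the template is a (n_frames, 12) NumPy array of MFCC features; `clip_template` is a hypothetical helper name, not from the original code:

```python
import numpy as np

def clip_template(template, ratio=0.7):
    """Keep only the first `ratio` fraction of the template's frames."""
    n_keep = int(round(ratio * len(template)))
    return template[:n_keep]

# Stand-in for the MFCC frames of a five-word template.
frames = np.zeros((100, 12))
clipped = clip_template(frames)
```

The clipped array is then matched against the test utterance exactly as before.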
First, we show the clipped and unclipped versions of "交通大學贊" matched against "交通大學":
The error obtained when comparing against the other utterances, using "交通大學贊" as the reference, is shown in the table below:
Next, we show the clipped and unclipped versions of "交通大隊爛" matched against "交通大隊":
The error obtained when comparing against the other utterances, using "交通大隊爛" as the reference, is shown in the table below: