Network-in-Network Implementation using TensorFlow

Introduction In this Lab, we will implement Network in Network [1], whose purpose is to enhance model discriminability for local patches within the receptive field. Conventional convolutional layers use linear filters followed by a nonlinear activation function. The downside of this conventional method is that the local receptors are too simple and don't project local […]
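NIN's core idea is to replace the linear filter with a small MLP slid over the feature map, which is equivalent to stacking 1×1 convolutions. A minimal NumPy sketch of that equivalence (the shapes and layer sizes here are toy assumptions, not the Lab's actual TensorFlow code):

```python
import numpy as np

# An "mlpconv" layer: a 1x1 convolution is a tiny MLP applied at every
# spatial position across the channel dimension.
def conv1x1(x, w, b):
    # x: (H, W, C_in), w: (C_in, C_out), b: (C_out,)
    return np.maximum(0.0, x @ w + b)  # ReLU nonlinearity

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8, 3))                          # toy feature map
h = conv1x1(x, rng.standard_normal((3, 16)), np.zeros(16))  # first 1x1 layer
y = conv1x1(h, rng.standard_normal((16, 4)), np.zeros(4))   # second 1x1 layer
print(y.shape)  # (8, 8, 4): spatial size preserved, channels remixed
```

Stacking two such layers after a regular convolution gives the nonlinear local projection the excerpt describes.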

Read more "Network-in-Network Implementation using TensorFlow"

Force Alignment using HMM 2

This is a continuation of the previous post. The discussion in this post covers: the variation of accuracy and correctness with the number of observation mixtures; how adding noise affects our recognition accuracy (the experiment in the last post used clean test input); and the confusion matrix of our test results […]
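A confusion matrix tabulates, for each true label, how often each label was predicted. A minimal sketch of building one (the labels and predictions here are made-up examples, not the Lab's recognition output):

```python
from collections import Counter

# rows = true label, columns = predicted label
def confusion_matrix(truth, predicted, labels):
    counts = Counter(zip(truth, predicted))
    return [[counts[(t, p)] for p in labels] for t in labels]

labels    = ["yes", "no"]
truth     = ["yes", "yes", "no", "no", "no"]
predicted = ["yes", "no",  "no", "no", "yes"]
cm = confusion_matrix(truth, predicted, labels)
print(cm)  # [[1, 1], [1, 2]]: off-diagonal entries are recognition errors
```

Diagonal mass corresponds to correct recognitions, so accuracy is the diagonal sum divided by the total count.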

Read more "Force Alignment using HMM 2"

GMM-Based Speaker Recognition

Introduction GMM vs. K-Means First, we have to understand what hard decisions and soft decisions are. Hard Decision A data point is assigned to a single cluster and the result is final. Soft Decision A data point is modeled by a distribution over clusters, so it is probabilistically defined and there is no definite […]
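The hard/soft distinction can be made concrete with one data point and two one-dimensional clusters (the means and variance below are toy values chosen for illustration, not fitted parameters):

```python
import math

means, var = [0.0, 4.0], 1.0   # two toy 1-D Gaussian clusters, equal variance
x = 1.0                        # one data point

# Hard decision (K-means style): commit to the single nearest mean.
hard = min(range(len(means)), key=lambda k: (x - means[k]) ** 2)

# Soft decision (GMM style): posterior responsibility of each cluster,
# assuming equal priors and equal variances.
likelihoods = [math.exp(-(x - m) ** 2 / (2 * var)) for m in means]
soft = [l / sum(likelihoods) for l in likelihoods]

print(hard)  # 0: the point is simply "in" the first cluster
print(soft)  # responsibilities summing to 1, roughly [0.98, 0.02]
```

The soft responsibilities are exactly what a GMM-based speaker model accumulates during EM, whereas K-means keeps only the hard winner.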

Read more "GMM-Based Speaker Recognition"

Linear Models for Classification

Introduction There are three major approaches to classification: the Discriminant Function, the Probabilistic Generative Model, and the Probabilistic Discriminative Model. The first is a brute-force approach, which is what neural networks use. It consists of least-squares classification, Fisher's linear discriminant, and the perceptron algorithm. Our focus will be on the latter two models, which take a probabilistic viewpoint. […]
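Of the discriminant-function methods named above, the perceptron is the simplest to sketch. A toy version on a linearly separable 2-D example (the data points here are made up for illustration):

```python
# Perceptron learning rule: nudge the weights toward each misclassified point.
X = [(2.0, 1.0), (1.5, 2.0), (-1.0, -1.5), (-2.0, -0.5)]
y = [1, 1, -1, -1]

w, b = [0.0, 0.0], 0.0
for _ in range(10):                                  # a few epochs suffice here
    for (x1, x2), t in zip(X, y):
        if t * (w[0] * x1 + w[1] * x2 + b) <= 0:     # misclassified point
            w[0] += t * x1; w[1] += t * x2; b += t   # perceptron update

preds = [1 if w[0] * x1 + w[1] * x2 + b > 0 else -1 for x1, x2 in X]
print(preds)  # [1, 1, -1, -1]: all training points correctly separated
```

The update only fires on mistakes, which is why the algorithm converges in finitely many steps on separable data but yields no probabilistic interpretation, in contrast to the generative and discriminative models covered next.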

Read more "Linear Models for Classification"