Introduction In this experiment, we will fine-tune VGG19, pre-trained on ImageNet, on the CIFAR-10 dataset. We will be using PyTorch for this experiment. (A Keras version is also available.) VGG19 is well known for producing strong results thanks to its depth; the “19” refers to its number of weight layers. […]Read more "Transfer Learning of VGG19 on Cifar-10 Dataset using PyTorch"
Introduction In this Lab, we will be implementing Network In Network, whose purpose is to enhance model discriminability for local patches within the receptive field. Conventional convolutional layers use linear filters followed by a nonlinear activation function. The downside of the conventional method is that the local receptors are too simple and don’t project local […]Read more "Network-in-Network Implementation using TensorFlow"
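The core Network-in-Network idea — an mlpconv layer is a small MLP applied independently at every spatial position, which is exactly a 1×1 convolution over the channel axis — can be illustrated with NumPy (shapes here are illustrative, not the lab's configuration):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8, 16))   # H x W x C_in feature map
w = rng.standard_normal((16, 32))     # 1x1 conv kernel == C_in x C_out matrix
b = rng.standard_normal(32)

# A 1x1 convolution is a matrix multiply over the channel axis,
# applied at each of the H*W spatial positions:
out_conv = np.einsum('hwc,cd->hwd', x, w) + b

# Equivalently, run the same per-pixel "MLP" on every channel vector:
out_mlp = np.array([[x[i, j] @ w + b for j in range(8)] for i in range(8)])

assert np.allclose(out_conv, out_mlp)
```

Stacking several such 1×1 layers with nonlinearities between them gives each local patch a more expressive (non-linear) feature extractor than a single linear filter.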
This is the continuation of the previous post. The discussion in this post covers: the variation of accuracy and correctness with the number of observation mixtures; how adding noise affects our recognition accuracy (the experiment in our last post used clean test input); and the confusion matrix of our test results […]Read more "Force Alignment using HMM 2"
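As a reminder of what the confusion matrix summarises, a minimal sketch (the labels and counts below are made up for illustration, not the post's actual results):

```python
import numpy as np

def confusion_matrix(true, pred, n_classes):
    """Rows index the true class, columns the predicted class."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(true, pred):
        cm[t, p] += 1
    return cm

true = [0, 0, 1, 1, 2, 2]
pred = [0, 1, 1, 1, 2, 0]
cm = confusion_matrix(true, pred, 3)

# Diagonal entries are correct recognitions; off-diagonal entries show
# which classes get confused with which. Overall accuracy is the
# diagonal mass over the total count:
accuracy = np.trace(cm) / cm.sum()
```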
Introduction The advantage of neural networks over other methods is their non-linearity. The non-linearity comes from the activation functions applied to the linear combinations of inputs. The activation functions that we will be using here are Sigmoid and ReLU. Let be the output of a neuron after a linear combination of its input neurons […]Read more "Neural Network for Multiclass Classification"
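The two activations named above can be sketched directly; each is applied elementwise to the neuron's linear combination of inputs:

```python
import numpy as np

def sigmoid(z):
    """Squashes any real input into (0, 1); smooth, but saturates for large |z|."""
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    """Passes positive inputs through unchanged and zeroes out the rest."""
    return np.maximum(0.0, z)

z = np.array([-2.0, 0.0, 2.0])
# sigmoid(0) = 0.5 (the decision midpoint); relu keeps only the positive part
```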
Introduction GMM vs K-Means First, we’ll have to understand what hard decisions and soft decisions are. Hard Decision A data point is assigned to a single cluster and the result is final. Soft Decision A data point is modeled by a distribution over clusters, so it is defined probabilistically and there’s no definite […]Read more "GMM-Based Speaker Recognition"
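The hard/soft distinction in one dimension, under assumed illustrative parameters (two Gaussian clusters with equal priors and unit variance):

```python
import numpy as np

means, var = np.array([0.0, 4.0]), 1.0  # illustrative cluster parameters

def gaussian(x, mu):
    return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

x = 2.0  # a point exactly halfway between the two cluster centres

# Hard decision (K-means style): commit to the single nearest centre
hard = int(np.argmin(np.abs(x - means)))

# Soft decision (GMM style): posterior responsibility of each cluster,
# here ~[0.5, 0.5] since the point is equidistant from both centres
likelihoods = gaussian(x, means)
soft = likelihoods / likelihoods.sum()
```

The soft responsibilities preserve the ambiguity that the hard assignment throws away, which is what makes GMMs better suited to modeling speaker variability.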
Introduction There are 3 major methods for approaching classification: Discriminant Function Probabilistic Generative Model Probabilistic Discriminative Model The first method is the brute-force approach, which is what neural networks use. It consists of least-squares classification, Fisher’s linear discriminant and the perceptron algorithm. Our focus will be on the latter 2 models, which take a probabilistic viewpoint. […]Read more "Linear Models for Classification"
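To give a flavour of the generative approach: with shared-variance Gaussian class-conditionals and equal priors, Bayes' rule makes the posterior P(C1|x) a sigmoid of a linear function of x. A 1-D sketch with illustrative parameters:

```python
import numpy as np

mu1, mu2, var = 1.0, -1.0, 1.0  # class-conditional N(mu_k, var), equal priors

def gaussian(x, mu):
    return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def posterior_c1(x):
    """Bayes' rule: p(C1|x) = p(x|C1) / (p(x|C1) + p(x|C2)) for equal priors."""
    p1, p2 = gaussian(x, mu1), gaussian(x, mu2)
    return p1 / (p1 + p2)

# The same posterior written as a sigmoid of a linear function of x:
w = (mu1 - mu2) / var                  # weight from the log-likelihood ratio
b = -(mu1**2 - mu2**2) / (2 * var)     # bias (priors cancel when equal)
sigmoid_form = lambda x: 1.0 / (1.0 + np.exp(-(w * x + b)))
```

The discriminative approach (logistic regression) fits w and b directly without modeling p(x|Ck) at all, which is the contrast the post develops.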
Project Design Raspberry Pi 2 serves as the computational unit on the vehicle. Computation of SIFT is done on a notebook. (It will be moved to an FPGA for hardware-accelerated computation.) A D-Link AP will be the hub of the network connecting the Raspberry Pi 2, notebook and FPGA. Object Tracking Method 1 The matching box must be at the […]Read more "Object Tracking using SIFT on Autonomous Vehicle"