Digit Classification Using HOG Features
This example shows how to classify digits using HOG features and a multiclass SVM classifier.
Object classification is an important task in many computer vision applications, including surveillance, automotive safety, and image retrieval. For example, in an automotive safety application, you may need to classify nearby objects as pedestrians or vehicles. Regardless of the type of object being classified, the basic procedure for creating an object classifier is:
- Acquire a labeled data set with images of the desired object.
- Partition the data set into a training set and a test set.
- Train the classifier using features extracted from the training set.
- Test the classifier using features extracted from the test set.
To illustrate, this example shows how to classify numerical digits using HOG (Histogram of Oriented Gradients) features [1] and a multiclass SVM (Support Vector Machine) classifier. This type of classification is often used in Optical Character Recognition (OCR) applications.
The example uses the fitcecoc function from the Statistics and Machine Learning Toolbox™ and the extractHOGFeatures function from the Computer Vision System Toolbox™.

Digit Data Set
Synthetic digit images are used for training. Each training image contains a digit surrounded by other digits, which mimics how digits normally appear together. Using synthetic images is convenient because it enables the creation of a variety of training samples without having to collect them manually. For testing, scans of handwritten digits are used to validate how well the classifier performs on data that is different from the training data. Although this is not the most representative data set, there is enough data to train and test a classifier and to show the feasibility of the approach.
In this example, the training set consists of 101 images for each of the 10 digits. The test set consists of 12 images per digit.
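A minimal sketch of how the two image sets might be loaded follows. The directory names are hypothetical placeholders; each is assumed to contain one sub-folder per digit (0-9) so that the folder names can serve as the image labels.

% Load the synthetic training images and the handwritten test images.
% The paths below are placeholders for wherever the digit images are stored.
syntheticDir = fullfile('digits', 'synthetic');
handwrittenDir = fullfile('digits', 'handwritten');

trainingSet = imageDatastore(syntheticDir, 'IncludeSubfolders', true, ...
    'LabelSource', 'foldernames');
testSet = imageDatastore(handwrittenDir, 'IncludeSubfolders', true, ...
    'LabelSource', 'foldernames');

% countEachLabel summarizes how many images each digit class contains.
countEachLabel(trainingSet)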
Prior to training and testing a classifier, a pre-processing step is applied to remove noise artifacts introduced while collecting the image samples. This provides better feature vectors for training the classifier.
Using HOG Features
The data used to train the classifier are HOG feature vectors extracted from the training images. Therefore, it is important to make sure the HOG feature vector encodes the right amount of information about the object. The extractHOGFeatures function returns a visualization output that can help form some intuition about just what the "right amount of information" means. By varying the HOG cell size parameter and visualizing the result, you can see the effect the cell size parameter has on the amount of shape information encoded in the feature vector.
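The comparison below is a minimal sketch of that experiment. The file name is only a placeholder; any one of the training images could be used. extractHOGFeatures returns both the feature vector and a visualization object that can be plotted.

% Compare HOG features extracted with three different cell sizes.
% "digitSample.png" is a hypothetical file name standing in for one of the
% synthetic training images.
img = imread('digitSample.png');

[hog2x2, vis2x2] = extractHOGFeatures(img, 'CellSize', [2 2]);
[hog4x4, vis4x4] = extractHOGFeatures(img, 'CellSize', [4 4]);
[hog8x8, vis8x8] = extractHOGFeatures(img, 'CellSize', [8 8]);

% Plot the input image and the three HOG visualizations, annotating each
% with the resulting feature-vector length.
figure;
subplot(2, 3, 1:3); imshow(img); title('Input image');
subplot(2, 3, 4); plot(vis2x2); title(sprintf('CellSize = [2 2], length = %d', length(hog2x2)));
subplot(2, 3, 5); plot(vis4x4); title(sprintf('CellSize = [4 4], length = %d', length(hog4x4)));
subplot(2, 3, 6); plot(vis8x8); title(sprintf('CellSize = [8 8], length = %d', length(hog8x8)));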
The visualization shows that a cell size of [8 8] does not encode much shape information, while a cell size of [2 2] encodes a lot of shape information but increases the dimensionality of the HOG feature vector significantly. A good compromise is a 4-by-4 cell size. This size setting encodes enough spatial information to visually identify a digit shape while limiting the number of dimensions in the HOG feature vector, which helps speed up training. In practice, the HOG parameters should be varied with repeated classifier training and testing to identify the optimal parameter settings.
Train a Digit Classifier
Digit classification is a multiclass classification problem, where you have to classify an image into one of the ten possible digit classes. In this example, the fitcecoc function from the Statistics and Machine Learning Toolbox™ is used to create a multiclass classifier using binary SVMs.
Start by extracting HOG features from the training set. These features will be used to train the classifier.
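The following sketch assumes the trainingSet datastore loaded earlier and the [4 4] cell size chosen above; the im2gray and imbinarize calls (Image Processing Toolbox) stand in for the pre-processing step mentioned before.

% Extract one HOG feature vector per training image.
cellSize = [4 4];
numImages = numel(trainingSet.Files);

% Use one image to determine the HOG feature length for pre-allocation.
img = im2gray(readimage(trainingSet, 1));   % im2gray accepts RGB or grayscale input
hogFeatureSize = length(extractHOGFeatures(imbinarize(img), 'CellSize', cellSize));

trainingFeatures = zeros(numImages, hogFeatureSize, 'single');
for i = 1:numImages
    img = im2gray(readimage(trainingSet, i));
    img = imbinarize(img);                  % simple pre-processing to suppress noise
    trainingFeatures(i, :) = extractHOGFeatures(img, 'CellSize', cellSize);
end

% The datastore labels (taken from the folder names) are the class labels.
trainingLabels = trainingSet.Labels;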
Next, train a classifier using the extracted features.
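A one-line sketch of the training step: for numeric predictors, fitcecoc uses binary SVM learners and a one-versus-one coding design by default.

% Train a multiclass ECOC model; each binary learner is an SVM and the
% default coding design compares every pair of digit classes.
classifier = fitcecoc(trainingFeatures, trainingLabels);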
Evaluate the Digit Classifier
Evaluate the digit classifier using images from the test set, and generate a confusion matrix to quantify the classifier accuracy.
As in the training step, first extract HOG features from the test images. These features will be used to make predictions using the trained classifier.
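A sketch of the evaluation step, reusing the cellSize and hogFeatureSize values from training and the testSet datastore assumed earlier.

% Extract HOG features from the test images with the same settings.
numTestImages = numel(testSet.Files);
testFeatures = zeros(numTestImages, hogFeatureSize, 'single');
for i = 1:numTestImages
    img = imbinarize(im2gray(readimage(testSet, i)));
    testFeatures(i, :) = extractHOGFeatures(img, 'CellSize', cellSize);
end
testLabels = testSet.Labels;

% Classify the test features and tabulate the results.
predictedLabels = predict(classifier, testFeatures);
confMat = confusionmat(testLabels, predictedLabels);

% Convert counts to percentages; each row (known label) sums to 100.
confMatPct = 100 * confMat ./ sum(confMat, 2);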
The table shows the confusion matrix in percentage form. The columns of the matrix represent the predicted labels, while the rows represent the known labels. For this test set, digit 0 is often misclassified as 6, most likely due to their similar shapes. Similar errors are seen for 9 and 3. Training with a more representative data set like MNIST [2] or SVHN [3], which contain thousands of handwritten characters, is likely to produce a better classifier compared with the one created using this synthetic data set.
Summary
This example illustrated the basic procedure for creating a multiclass object classifier using the extractHOGFeatures function from the Computer Vision System Toolbox™ and the fitcecoc function from the Statistics and Machine Learning Toolbox™. Although HOG features and an ECOC classifier were used here, other features and machine learning algorithms can be used in the same way. For instance, you can explore using different feature types to train the classifier, or you can see the effect of using other machine learning algorithms available in the Statistics and Machine Learning Toolbox™, such as k-nearest neighbors.

References
[1] N. Dalal and B. Triggs, "Histograms of Oriented Gradients for Human Detection," Proc. IEEE Conf. Computer Vision and Pattern Recognition, vol. 1, pp. 886-893, 2005.
[2] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-Based Learning Applied to Document Recognition," Proceedings of the IEEE, vol. 86, pp. 2278-2324, 1998.
[3] Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, and A. Y. Ng, "Reading Digits in Natural Images with Unsupervised Feature Learning," NIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2011.
-----------------
http://www.eecs.wsu.edu/~schneidj/ufdtd/chap14.pdf
The Equivalence Principle
Recall the boundary conditions that pertain to the electric and magnetic fields tangential to an interface:

\[
\hat{n}' \times (\mathbf{E}_1 - \mathbf{E}_2) = -\mathbf{M}_s, \tag{14.1}
\]
\[
\hat{n}' \times (\mathbf{H}_1 - \mathbf{H}_2) = \mathbf{J}_s, \tag{14.2}
\]

where \(\hat{n}'\) is normal to the interface, pointing toward region 1. The subscript 1 indicates the fields immediately adjacent to one side of the interface and the subscript 2 indicates the fields just on the other side of the interface. The "interface" can be either a physical boundary between two media or a fictitious boundary with the same medium on either side. The current \(\mathbf{M}_s\) is a magnetic surface current, i.e., a current that flows only tangentially to the interface. In practice there is no magnetic charge and thus no magnetic current. Therefore, (14.1) states that the tangential components of \(\mathbf{E}\) must be continuous across the boundary. However, in theory, we can imagine a scenario where the tangential fields are discontinuous. If this were the case, the magnetic current \(\mathbf{M}_s\) would have to be nonzero to account for the discontinuity.
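As a brief worked restatement (only a rearrangement of (14.1) and (14.2), not additional material from the chapter): with no magnetic surface current the tangential electric field is continuous, while supporting a prescribed jump in the tangential fields requires specific surface currents:

\[
\mathbf{M}_s = 0 \quad\Longrightarrow\quad \hat{n}' \times \mathbf{E}_1 = \hat{n}' \times \mathbf{E}_2,
\]
\[
\mathbf{M}_s = -\,\hat{n}' \times (\mathbf{E}_1 - \mathbf{E}_2), \qquad \mathbf{J}_s = \hat{n}' \times (\mathbf{H}_1 - \mathbf{H}_2).
\]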