===== Part 1: SVM with no bias term =====

Formulate a soft-margin SVM without the bias term, i.e. one where the discriminant function is equal to $\mathbf{w}^{T} \mathbf{x}$.
Derive the saddle point conditions, the KKT conditions, and the dual.
Compare it to the standard SVM formulation.
As we discussed in class, SMO-type algorithms for the dual optimize the smallest possible number of variables at a time, namely two.
Is this still the case for the formulation you have derived?
Hint: consider the difference in the constraints.
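
For reference, the standard soft-margin primal with the bias term can be written as
$$
\min_{\mathbf{w}, b, \boldsymbol{\xi}} \;\; \frac{1}{2}\|\mathbf{w}\|^{2} + C \sum_{i} \xi_i
\qquad \text{subject to} \qquad y_i(\mathbf{w}^{T}\mathbf{x}_i + b) \geq 1 - \xi_i, \quad \xi_i \geq 0;
$$
the formulation you are asked to derive differs only in dropping $b$.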

===== Part 2: Soft-margin SVM for separable data =====

Consider training a soft-margin SVM
with the soft-margin constant $C$ set to some positive value. Suppose the training data is linearly separable.
Since increasing the $\xi_i$ can only increase the objective of the primal problem (which
we are trying to minimize), at the optimal solution to the primal problem, all the
$\xi_i$ will be equal to zero. True or false? Explain!
Given a linearly separable dataset, is it necessarily better to use a hard-margin SVM over a soft-margin SVM? Explain!

===== Part 3: Using SVMs =====

The data for this question comes from a database called SCOP (structural classification of proteins).
We will consider the problem of distinguishing a particular class of proteins from a selection of examples sampled from the rest of the SCOP database, using features derived from their sequence (note that a protein is a sequence of arbitrary length over the alphabet of the 20 amino acids).
I chose to represent the proteins in terms of their motif composition. A sequence motif is a pattern of amino acids that is conserved in evolution.
Motifs are usually associated with regions of the protein that are important for its function, and are therefore useful in differentiating between classes of proteins.
In this part of the assignment we will explore the dependence of classifier accuracy on the kernel, kernel parameters, kernel normalization, and the SVM soft-margin parameter.
In your implementation you can use the scikit-learn [[http://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html|svm]] class.
In this question we will consider both the Gaussian and polynomial kernels:
$$
K_{gauss}(\mathbf{x}, \mathbf{x'}) = \exp(-\gamma \|\mathbf{x} - \mathbf{x'}\|^{2})
$$
$$
K_{poly}(\mathbf{x}, \mathbf{x'}) = (\mathbf{x}^T \mathbf{x}' + 1)^{p}
$$
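
As a starting point, here is a minimal sketch of instantiating both kernels with scikit-learn's SVC; the toy X and y below are hypothetical stand-ins for the motif-composition data:
<code python>
import numpy as np
from sklearn.svm import SVC

# hypothetical stand-in for the motif-composition data
rng = np.random.RandomState(0)
X = rng.rand(40, 10)
y = np.where(X[:, 0] > 0.5, 1, -1)

# Gaussian kernel: K(x, x') = exp(-gamma * ||x - x'||^2)
clf_gauss = SVC(kernel='rbf', C=1.0, gamma=0.1)

# polynomial kernel: scikit-learn computes (gamma * x^T x' + coef0)^degree,
# so gamma=1, coef0=1 matches the formula above
clf_poly = SVC(kernel='poly', C=1.0, degree=3, gamma=1.0, coef0=1.0)

clf_gauss.fit(X, y)
</code>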
Plot the accuracy of the SVM, measured using the area under the ROC curve, as a function of both the soft-margin parameter of the SVM and the free parameter of the kernel function.
Accuracy should be measured in five-fold cross-validation.
Show a couple of representative cross sections of this plot: one for a given value of the soft-margin parameter, and one for a given value of the kernel parameter.
Comment on the results. When exploring the values of a continuous classifier/kernel parameter it is customary to sample it on a logarithmic scale (note that the degree of the polynomial kernel is not such a parameter).
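
A minimal sketch of such a sweep, assuming X and y hold the features and labels as above (the grid boundaries are only an illustration):
<code python>
import numpy as np
import matplotlib.pyplot as plt
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# logarithmic grids for the soft-margin and Gaussian kernel parameters
C_values = np.logspace(-2, 2, 5)
gamma_values = np.logspace(-3, 1, 5)

auc = np.zeros((len(C_values), len(gamma_values)))
for i, C in enumerate(C_values):
    for j, gamma in enumerate(gamma_values):
        clf = SVC(kernel='rbf', C=C, gamma=gamma)
        # five-fold cross-validated area under the ROC curve
        auc[i, j] = cross_val_score(clf, X, y, cv=5,
                                    scoring='roc_auc').mean()

# representative cross section at a fixed value of C
plt.semilogx(gamma_values, auc[2], marker='o')
plt.xlabel('gamma')
plt.ylabel('AUC')
plt.show()
</code>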
Next, we will compare the accuracy of an SVM with a Gaussian kernel on the raw data with the accuracy obtained when the data is normalized to unit vectors (the feature values of each example are divided by the example's norm).
This is different from standardization, which operates at the level of individual features. Normalizing to unit vectors is more appropriate for this dataset because it is sparse, i.e. most of the features are zero.
Perform the comparison using the area under the ROC curve measured in five-fold cross-validation.
The optimal SVM/kernel parameters should be chosen by grid search on the training set of each fold.
Use the scikit-learn [[http://scikit-learn.org/stable/tutorial/statistical_inference/model_selection.html|grid-search]] class for model selection.
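
One way to set this up, again assuming X and y as above: nesting GridSearchCV inside cross_val_score performs the grid search on the training set of each outer fold.
<code python>
import numpy as np
from sklearn.preprocessing import normalize
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, cross_val_score

# divide each example (row of X) by its Euclidean norm
X_unit = normalize(X, norm='l2')

param_grid = {'C': np.logspace(-2, 2, 5),
              'gamma': np.logspace(-3, 1, 5)}
# the inner grid search selects parameters on the training set of each fold
search = GridSearchCV(SVC(kernel='rbf'), param_grid,
                      scoring='roc_auc', cv=3)
for name, data in [('raw', X), ('normalized', X_unit)]:
    scores = cross_val_score(search, data, y, cv=5, scoring='roc_auc')
    print(name, scores.mean())
</code>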
Finally, visualize the kernel matrix associated with the dataset.
Explain the structure that you are seeing in the plot (it is more interesting when the data is normalized).
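
One way to produce the plot, assuming the normalized data from the previous step; sorting the examples by label makes any class structure easier to see:
<code python>
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics.pairwise import rbf_kernel

# sort examples by label so class structure shows up as blocks
order = np.argsort(y)
K = rbf_kernel(X_unit[order], gamma=0.1)  # Gaussian kernel matrix

plt.imshow(K, interpolation='nearest')
plt.colorbar()
plt.show()
</code>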

===== Submission =====

Submit your report via RamCT. Python code can be displayed in your report if it is succinct (not more than a page or two at the most) or submitted separately. The LaTeX sample document shows how to display Python code in a LaTeX document.
Also, please check in a text file named README that describes what you found most difficult in completing this assignment (or provide that as a comment on RamCT).