========= Assignment 3: Support Vector Machines =========

Due:  October 16th at 11pm
  
===== Part 1:  SVM with no bias term =====
  
Formulate a soft-margin SVM without the bias term, i.e. one where the discriminant function is equal to $\mathbf{w}^{T} \mathbf{x}$.
Derive the saddle point conditions, the KKT conditions, and the dual.
Compare it to the standard SVM formulation that was derived in class.
In class we discussed SMO-type algorithms for optimizing the dual SVM.  At each step SMO optimizes two variables at a time, which is the smallest number possible.
Is this still the case for the formulation you have derived?  In other words, is two the smallest number of variables that can be optimized at a time?
Hint:  consider the difference in the constraints.
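
For reference, the standard soft-margin primal with a bias term $b$ is usually written as follows (a reminder only; the notation may differ slightly from the class notes):
$$
\min_{\mathbf{w}, b, \boldsymbol{\xi}} \;\; \frac{1}{2} || \mathbf{w} ||^2 + C \sum_{i} \xi_i
\quad \text{subject to} \quad y_i (\mathbf{w}^{T} \mathbf{x}_i + b) \geq 1 - \xi_i, \;\; \xi_i \geq 0.
$$
The bias-less formulation you are asked to analyze drops the $b$ from the constraints.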
  
===== Part 2:  Soft-margin SVM for separable data =====
  
Consider training a soft-margin SVM with the soft-margin constant $C$ set to some positive constant, and suppose the training data is linearly separable.
Since increasing the $\xi_i$ can only increase the objective of the primal problem (which we are trying to minimize), at the optimal solution to the primal problem all the training examples will have $\xi_i$ equal to zero.
True or false?  Explain!
Given a linearly separable dataset, is it necessarily better to use a hard-margin SVM over a soft-margin SVM?  Explain!
  
===== Part 3:  Using SVMs =====
  
The data for this question comes from a database called SCOP (structural classification of proteins), which classifies proteins into classes according to their structure (download it from {{assignments:scop_motif.data|here}}).
The data is a two-class classification problem of distinguishing a particular class of proteins from a selection of examples sampled from the rest of the SCOP database, using features derived from their sequence (a protein is a chain of amino acids, so as computer scientists we can consider it a sequence over the alphabet of the 20 amino acids).
I chose to represent the proteins in terms of their motif composition.  A sequence motif is a pattern of amino acids that is conserved in evolution.
Motifs are usually associated with regions of the protein that are important for its function, and are therefore useful in differentiating between classes of proteins.
A given protein will typically contain only a handful of motifs, and so the data is very sparse.
Therefore, only the non-zero elements of the data are represented.
Each line in the file describes a single example.  Here's an example from the file:
  
<code>
d1scta_,a.1.1.2 31417:1.0 32645:1.0 39208:1.0 42164:1.0 ....
</code>
The first column is the ID of the protein, and the second is the class it belongs to (the values for the class variable are ''a.1.1.2'', which is the given class of proteins, and ''rest'', which is the negative class representing the rest of the database); the remainder consists of elements of the form ''feature_id:value'', each of which provides the id of a feature and the value associated with it.
This is an extension of the format used by LibSVM, which scikit-learn can read.
See a discussion of this format and how to read it [[http://scikit-learn.org/stable/datasets/#datasets-in-svmlight-libsvm-format | here]].
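
As a minimal sketch of one way to read the file (the function name and the choice of encoding the classes as +1/-1 are assumptions for illustration, not part of the assignment):
<code python>
# Minimal sketch: parse lines of the form
#   protein_id,class feature_id:value feature_id:value ...
# into a SciPy sparse matrix X and a label vector y.
import numpy as np
from scipy.sparse import csr_matrix

def load_motif_data(path):
    rows, cols, vals, labels = [], [], [], []
    for i, line in enumerate(open(path)):
        tokens = line.split()
        protein_id, label = tokens[0].split(',')
        # Encode the positive class as +1 and 'rest' as -1 (an assumption).
        labels.append(1 if label == 'a.1.1.2' else -1)
        for feature in tokens[1:]:
            feature_id, value = feature.split(':')
            rows.append(i)
            cols.append(int(feature_id))
            vals.append(float(value))
    X = csr_matrix((vals, (rows, cols)))
    y = np.array(labels)
    return X, y
</code>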

We note that the data is very high dimensional since the number of conserved patterns in the space of all proteins is large.
The data was constructed as part of the following analysis of detecting distant relationships between proteins:

  * A. Ben-Hur and D. Brutlag. [[http://bioinformatics.oxfordjournals.org/content/19/suppl_1/i26.abstract | Remote homology detection: a motif based approach]]. In: Proceedings, eleventh international conference on intelligent systems for molecular biology. Bioinformatics 19(Suppl. 1): i26-i33, 2003.

In this part of the assignment we will explore the dependence of classifier accuracy on the kernel, kernel parameters, kernel normalization, and the SVM soft-margin parameter.
In your implementation you can use the scikit-learn [[http://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html | svm]] class.

In this question we will consider both the Gaussian and polynomial kernels:
$$
K_{gauss}(\mathbf{x}, \mathbf{x}') = \exp(-\gamma || \mathbf{x} - \mathbf{x}' ||^2)
$$
and
$$
K_{poly}(\mathbf{x}, \mathbf{x}') = (\mathbf{x}^T \mathbf{x}' + 1)^{p}.
$$
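
In scikit-learn these correspond to the ''rbf'' and ''poly'' kernels of the ''SVC'' class; note that ''SVC'' computes the polynomial kernel as $(\gamma \mathbf{x}^T \mathbf{x}' + r)^{p}$, so matching $K_{poly}$ as written above requires setting ''gamma=1'' and ''coef0=1''.  A sketch (the parameter values are illustrative, not recommendations):
<code python>
# Sketch: SVMs with the two kernels defined above.
from sklearn.svm import SVC

gaussian_svm = SVC(kernel='rbf', gamma=0.1, C=1.0)
# SVC's polynomial kernel is (gamma * x^T x' + coef0)^degree, so gamma=1
# and coef0=1 reproduce K_poly as written above.
polynomial_svm = SVC(kernel='poly', degree=3, gamma=1.0, coef0=1.0, C=1.0)
</code>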

Plot the accuracy of the SVM, measured using the area under the ROC curve, as a function of both the soft-margin parameter of the SVM and the free parameter of the kernel function.
Accuracy should be measured in five-fold cross-validation.
Show a couple of representative cross sections of this plot: one for a given value of the soft-margin parameter, and one for a given value of the kernel parameter.
Comment on the results.  When exploring the values of a continuous classifier/kernel parameter it is useful to use values that are distributed on an exponential grid, i.e. something like 0.01, 0.1, 1, 10, 100 (note that the degree of the polynomial kernel is not such a parameter).
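
As a sketch of one way to compute such a grid of AUC values (assuming ''X'' and ''y'' have been loaded as above; the grid values are illustrative):
<code python>
# Sketch: mean AUC in five-fold cross-validation over an exponential grid
# of the soft-margin parameter C and the Gaussian kernel parameter gamma.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

C_values = [0.01, 0.1, 1, 10, 100]
gamma_values = [0.01, 0.1, 1, 10, 100]
auc = np.zeros((len(C_values), len(gamma_values)))
for i, C in enumerate(C_values):
    for j, gamma in enumerate(gamma_values):
        scores = cross_val_score(SVC(kernel='rbf', C=C, gamma=gamma),
                                 X, y, cv=5, scoring='roc_auc')
        auc[i, j] = scores.mean()
# auc can now be plotted as a heat map; its rows and columns give the
# cross sections mentioned above.
</code>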
  
Next, we will compare the accuracy of an SVM with a Gaussian kernel on the raw data with the accuracy obtained when the data is normalized to unit vectors (the feature values of each example are divided by the norm of that example).
This is different from standardization, which operates at the level of individual features.  Normalizing to unit vectors is more appropriate for this dataset, as it is sparse, i.e. most of the features are zero.
Perform your comparison using the accuracy measured by the area under the ROC curve in five-fold cross-validation.
The optimal values of the kernel parameters should be chosen by cross-validation, where the optimal SVM/kernel parameters are selected using grid search on the training set of each fold.
Use the scikit-learn [[http://scikit-learn.org/stable/tutorial/statistical_inference/model_selection.html | grid-search]] class for model selection.
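
A sketch of this setup, assuming ''X'' and ''y'' as above (the parameter grid is illustrative): each example is scaled to unit norm, and the grid search is nested inside the outer cross-validation so that parameters are chosen on the training set of each fold.
<code python>
# Sketch: unit-norm scaling of the examples, plus nested cross-validation
# where GridSearchCV picks C and gamma on each outer fold's training set.
from sklearn.preprocessing import normalize
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, cross_val_score

X_unit = normalize(X)  # divides each row (example) by its Euclidean norm
param_grid = {'C': [0.01, 0.1, 1, 10, 100],
              'gamma': [0.01, 0.1, 1, 10, 100]}
search = GridSearchCV(SVC(kernel='rbf'), param_grid, scoring='roc_auc', cv=5)
outer_scores = cross_val_score(search, X_unit, y, cv=5, scoring='roc_auc')
</code>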
  
Finally, visualize the kernel matrix associated with the dataset.
Explain the structure that you are seeing in the plot (it is more interesting when the data is normalized).
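
One possible way to do this (a sketch; the gamma value is illustrative, and ''X_unit'' is the normalized data from above):
<code python>
# Sketch: compute the Gaussian kernel matrix and display it as an image.
import matplotlib.pyplot as plt
from sklearn.metrics.pairwise import rbf_kernel

K = rbf_kernel(X_unit, gamma=0.1)  # n x n matrix of pairwise kernel values
plt.imshow(K)
plt.colorbar()
plt.title('Gaussian kernel matrix')
plt.show()
</code>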
  
===== Submission =====

Submit the pdf of your report via Canvas.  Python code can be displayed in your report if it is succinct (not more than a page or two at the most) or submitted separately.  The latex sample document shows how to display Python code in a latex document.  Code needs to be there so we can make sure that you implemented the algorithms and data analysis methodology correctly.  Canvas allows you to submit multiple files for an assignment, so DO NOT submit an archive file (tar, zip, etc.).  Canvas will only allow you to submit pdfs (.pdf extension) or Python code (.py extension).
For this assignment there is a strict 8-page limit (not including references and code that is provided as an appendix).  We will take off points for reports that go over the page limit.
In addition to the code snippets that you include in your report, make sure you provide complete code from which we can see exactly how your results were generated.

===== Grading =====

A few general guidelines for this and future assignments in the course:

  * Always provide a description of the method you used to produce a given result, in sufficient detail that the reader can reproduce your results on the basis of the description (UNLESS the method has been provided in class or is in the book).  Your code needs to be provided in sufficient detail so we can make sure that your implementation is correct.  The saying that "the devil is in the details" holds true for machine learning, and sometimes makes the difference between correct and incorrect results.  If your code is more than a few lines, you can include it as an appendix to your report, or submit it as a separate file.  Make sure your code is readable!
  * You can provide results in the form of tables, figures or text - whatever form is most appropriate for a given problem.
  * In any machine learning paper there is a discussion of the results.  There is a similar expectation that you reason about your results in your assignments.  For example, for the learning curve problem, what can you say on the basis of the observed learning curve?
  * Write succinct answers.  We will take off points for rambling answers that are not to the point, and similarly if we have to wade through a lot of data/results that are not to the point.

<code>
Grading sheet for assignment 3

Part 1:  40 points.
(10 points):  Primal SVM formulation is correct
( 7 points):  Lagrangian found correctly
( 8 points):  Derivation of saddle point equations
(10 points):  Derivation of the dual
( 5 points):  Discussion of the implication of the form of the dual for SMO-like algorithms

Part 2:  10 points.

Part 3:  40 points.
(20 points):  Accuracy as a function of parameters and discussion of the results
(15 points):  Comparison of normalized and non-normalized kernels and correct model selection
( 5 points):  Visualization of the kernel matrix and observations made about it

Report structure, grammar and spelling:  10 points
(10 points):  Heading and subheading structure easy to follow and clearly divides report into logical sections.
              Code, math, figure captions, and all other aspects of the report are well-written and formatted.
              Grammar, spelling, and punctuation.
</code>