~~NOTOC~~
  
====== Assignment 3 ======
  
**Due:** 10/3 at 11:59pm.
  
===== Preliminaries =====
  
In this assignment you will explore ridge regression applied to the task of predicting wine quality.
You will use the [[http://archive.ics.uci.edu/ml/datasets/Wine+Quality | wine quality]] dataset from the UCI machine learning repository, and compare the accuracy obtained using ridge regression to the results from a [[http://www.sciencedirect.com/science/article/pii/S0167923609001377# | recent publication]] (if you have trouble accessing that version of the paper, here's a link to a [[http://www3.dsi.uminho.pt/pcortez/wine5.pdf | preprint]]).
The wine data is composed of two datasets - one for white wines and one for reds.  In this assignment, perform all your analyses on just the red wine data.
  
The features for the wine dataset are not standardized, so make sure you do this, especially since we are going to consider the magnitude of the weight vector (recall that standardization entails subtracting the mean and then dividing by the standard deviation for each feature; you can use the [[http://docs.scipy.org/doc/numpy/reference/routines.statistics.html | Numpy statistics module]] to perform the required calculations).
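
For example, a minimal standardization sketch using NumPy could look like the following (the file name, delimiter, and column layout are assumptions based on the UCI distribution of the red wine data; adjust them to match your local copy):

<code python>
import numpy as np

# Load the red wine data (the UCI file is assumed to be semicolon-separated
# with a header row; the file name is an assumption).
data = np.genfromtxt('winequality-red.csv', delimiter=';', skip_header=1)
X, y = data[:, :-1], data[:, -1]

# Standardize each feature: subtract its mean, then divide by its standard deviation.
X = (X - X.mean(axis=0)) / X.std(axis=0)
</code>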
  
==== Part 1 ====
  
Implement ridge regression in a class called RidgeRegression that implements the classifier API, i.e. ''fit'' and ''predict'' methods with the same signature as the classifiers you implemented in the previous assignment.  Also implement functions for computing the following measures of error:
  
  * The Root Mean Square Error (RMSE).
  * The Mean Absolute Deviation (MAD).
-according to their structure ​(download it from {{assignments:​scop_motif.data|here}}).   +
-The data is a two-class classification +
-problem +
-of distinguishing a particular class of proteins from a selection of +
-examples sampled from the rest of the SCOP database. +
-I chose to represent the proteins in +
-terms of their motif composition. ​ A sequence motif is a +
-pattern of nucleotides/​amino acids that is conserved in evolution. +
-Motifs are usually associated with regions of the protein that are +
-important for its function, and are therefore useful in differentiating between classes of proteins. +
-A given protein will typically contain only a handful of motifs, and +
-so the data is very sparse. ​ It is also very high dimensional,​ since +
-the number of conserved patterns in the space of all proteins is +
-large. +
-The data was constructed as part of the following analysis of detecting distant relationships between proteins:+
  
  
For a hypothesis $h$, they are defined as follows:
  
$$RMSE(h) = \sqrt{\frac{1}{N}\sum_{i=1}^N (y_i - h(\mathbf{x}_i))^2}$$
  
and
  
$$MAD(h) = \frac{1}{N}\sum_{i=1}^N |y_i - h(\mathbf{x}_i)|.$$
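
A minimal sketch of what such a class and the two error functions might look like is given below.  It uses the closed-form solution of the ridge regression objective; the constructor argument name ''lam'' and the omission of an intercept term are assumptions, so adapt it to whatever API you used in the previous assignment.

<code python>
import numpy as np

class RidgeRegression:
    """Ridge regression via the closed-form (regularized normal equations) solution."""
    def __init__(self, lam=1.0):
        self.lam = lam

    def fit(self, X, y):
        # w = (X^T X + lambda * I)^{-1} X^T y
        n_features = X.shape[1]
        A = np.dot(X.T, X) + self.lam * np.eye(n_features)
        self.w = np.linalg.solve(A, np.dot(X.T, y))
        return self

    def predict(self, X):
        return np.dot(X, self.w)

def rmse(y_true, y_pred):
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def mad(y_true, y_pred):
    return np.mean(np.abs(y_true - y_pred))
</code>

Note that this sketch has no bias term; depending on how you want to handle the intercept, you may append a column of ones to the data or center the labels before fitting.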

With the code you just implemented, your next task is to explore the dependence of error on the value of the regularization parameter, $\lambda$.
In what follows, set aside 30% of the data as a validation set, and compute the in-sample error and the validation-set error as a function of the parameter $\lambda$ on the red wine data.  Choose the values of $\lambda$ on a logarithmic scale with values 0.01, 0.1, 1, 10, 100, 1000 and plot the RMSE.
Repeat the same experiment where, instead of using all the training data, you choose 20 random examples out of the training set and train your model using those 20 examples, while evaluating on the same validation set.
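
As a sketch, the $\lambda$ sweep could be structured roughly as follows (this assumes the ''RidgeRegression'' class and ''rmse'' function sketched above, the standardized arrays ''X'' and ''y'', and a fixed random seed for reproducibility; none of these names are prescribed):

<code python>
import numpy as np
import matplotlib.pyplot as plt

# 70/30 train/validation split.
rng = np.random.RandomState(0)
perm = rng.permutation(len(y))
n_train = int(0.7 * len(y))
X_train, y_train = X[perm[:n_train]], y[perm[:n_train]]
X_val, y_val = X[perm[n_train:]], y[perm[n_train:]]

lambdas = [0.01, 0.1, 1, 10, 100, 1000]
train_rmse, val_rmse = [], []
for lam in lambdas:
    model = RidgeRegression(lam=lam).fit(X_train, y_train)
    train_rmse.append(rmse(y_train, model.predict(X_train)))
    val_rmse.append(rmse(y_val, model.predict(X_val)))

plt.semilogx(lambdas, train_rmse, 'o-', label='in-sample RMSE')
plt.semilogx(lambdas, val_rmse, 's-', label='validation RMSE')
plt.xlabel(r'$\lambda$')
plt.ylabel('RMSE')
plt.legend()
plt.show()
</code>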

Now answer the following:

  * What is the optimal value of $\lambda$?
  * What observations can you make on the basis of these plots?  (The concepts of overfitting/underfitting should be addressed in your answer.)
  * Finally, compare the results that you are getting with the published results in the paper linked above.  In particular, is the performance you have obtained comparable to that observed in the paper?

==== Part 2 ====

Regression Error Characteristic (REC) curves are an interesting way of visualizing regression error, as described
in the following [[http://machinelearning.wustl.edu/mlpapers/paper_files/icml2003_BiB03.pdf|paper]].
Write a function that plots the REC curve of a regression method, and plot the REC curve of the best regressor you found in Part 1 of the assignment (i.e. the one that gave the lowest error on the validation set).  Plot the REC curve for both the validation set and the training set.
What can you learn from this curve that you cannot learn from an error measure such as RMSE or MAD?
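
One way such a plotting function could be written is sketched below; it uses absolute deviation as the error measure (squared error is another option discussed in the paper), and the commented usage lines with ''best_model'' are hypothetical:

<code python>
import numpy as np
import matplotlib.pyplot as plt

def rec_curve(y_true, y_pred, label=None):
    """Plot the fraction of examples whose absolute error falls within a
    tolerance, as a function of the tolerance (the REC curve)."""
    errors = np.sort(np.abs(y_true - y_pred))
    # When the tolerance equals errors[i], a fraction (i + 1) / N of the
    # examples are predicted within that tolerance.
    coverage = np.arange(1, len(errors) + 1) / float(len(errors))
    plt.step(errors, coverage, where='post', label=label)
    plt.xlabel('error tolerance')
    plt.ylabel('fraction of examples within tolerance')

# Hypothetical usage with the best regressor from Part 1:
# rec_curve(y_train, best_model.predict(X_train), label='training set')
# rec_curve(y_val, best_model.predict(X_val), label='validation set')
# plt.legend(); plt.show()
</code>
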
==== Part 3 ====

As we discussed in class, the magnitude of the weight vector components can be interpreted as a measure of feature importance.
Train a ridge regression model on the subset of the dataset that you reserved for training.
We will explore the relationship between the magnitude of weight vector components and their relevance to the prediction task in several ways.
Each feature is associated with a component of the weight vector; it can also be associated with the correlation of that feature with the vector of labels.
The magnitude of a weight vector component gives one indication of a feature's relevance; the correlation of that feature with the labels gives another.  To compare the two,
create a scatter plot of weight vector components against the [[https://en.wikipedia.org/wiki/Pearson_product-moment_correlation_coefficient | Pearson correlation coefficient]] of the corresponding feature with the labels (again, you can use the [[http://docs.scipy.org/doc/numpy/reference/routines.statistics.html | Numpy statistics module]] to compute it).
What can you conclude from this plot?
The paper ranks features according to their importance using a different approach.  Compare your results with what they obtain.
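
As a sketch, the scatter plot could be produced roughly as follows (assuming a trained model ''model'' whose weight vector ''model.w'' has one component per feature of the standardized training data, as in the earlier sketches; ''np.corrcoef'' is one way to obtain the Pearson coefficient):

<code python>
import numpy as np
import matplotlib.pyplot as plt

# Pearson correlation of each feature with the labels.
correlations = np.array([np.corrcoef(X_train[:, j], y_train)[0, 1]
                         for j in range(X_train.shape[1])])

plt.scatter(correlations, model.w)
plt.xlabel('Pearson correlation of feature with the labels')
plt.ylabel('weight vector component')
plt.show()
</code>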

Next, perform the following experiment:
Incrementally remove the feature whose weight vector component has the lowest absolute value and retrain the ridge regression model.
Plot the RMSE on the validation set that you set aside as a function of the number of features that remain, and comment on the results.
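
A possible sketch of this loop (''best_lambda'' is a hypothetical name for the value of $\lambda$ selected in Part 1, and the remaining column indices are tracked explicitly so that the smallest-magnitude weight can be mapped back to the right feature):

<code python>
import numpy as np

remaining = list(range(X_train.shape[1]))
results = []  # (number of remaining features, validation RMSE)

while remaining:
    model = RidgeRegression(lam=best_lambda).fit(X_train[:, remaining], y_train)
    results.append((len(remaining), rmse(y_val, model.predict(X_val[:, remaining]))))
    # Drop the feature whose weight has the smallest absolute value.
    del remaining[int(np.argmin(np.abs(model.w)))]
</code>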

===== Submission =====

Submit your report via Canvas.  Python code can be displayed in your report if it is short and helps the reader understand what you have done; the sample LaTeX document provided in assignment 1 shows how to display Python code.  Submit the Python code that was used to generate the results as a file called ''assignment3.py'' (you can split the code into several .py files; Canvas allows you to submit multiple files).  Typing

<code>
$ python assignment3.py
</code>

should generate all the tables/plots used in your report.

===== Grading =====

A few general guidelines for this and future assignments in the course:

  * Your answers should be concise and to the point.
  * You need to use LaTeX to write the report.
  * The report should be well structured and the writing clear, with good grammar and correct spelling; math, code, figures, and captions should be well formatted (every figure and table needs to have a caption that explains what is being shown).
  * Whenever you use information from the web or published papers, a reference should be provided.  Failure to do so is considered plagiarism.

We will take off points if these guidelines are not followed.

  * Always provide a description of the method you used to produce a given result, in sufficient detail that the reader can reproduce your results on the basis of the description.  You can use a few lines of Python code or pseudo-code.
  * You can provide results in the form of tables, figures, or text - whatever form is most appropriate for a given problem.  There are no rules about how much space each answer should take, BUT we will take off points if we have to wade through a lot of redundant data.
  * In any machine learning paper there is a discussion of the results.  There is a similar expectation for your assignments: reason about your results.  For example, for the learning curve problem, what can you say on the basis of the observed learning curve?

<code>
Grading sheet for assignment 3

Part 1:  50 points.
(15 points):  Ridge regression is correctly implemented.
(15 points):  Plots of RMSE as a function of lambda are generated correctly.
(20 points):  Discussion of the results.

Part 2:  25 points.
(15 points):  REC curves are generated correctly.
(10 points):  Discussion of REC curves.

Part 3:  25 points.
(20 points):  Weight vector analysis.
( 5 points):  Comparison to the published results.
  
</code>