~~NOTOC~~

====== Assignment 3 ======

**Due:** 10/ at 11pm.

===== Preliminaries =====
  
In this assignment you will explore ridge regression applied to the task of predicting wine quality.
You will use the [[http://archive.ics.uci.edu/ml/datasets/Wine+Quality | wine quality]] dataset from the UCI machine learning repository, and compare the accuracy obtained using ridge regression to the results from a [[http://www.sciencedirect.com/science/article/pii/S0167923609001377# | recent publication]] (if you have trouble accessing that version of the paper, here's a link to a [[http://www3.dsi.uminho.pt/pcortez/wine5.pdf| preprint]]).
The wine data is composed of two datasets - one for white wines, and one for reds.  In this assignment, perform all your analyses on just the red wine data.

The features for the wine dataset are not standardized, so make sure you do this, especially since we are going to consider the magnitude of the weight vector (recall that standardization entails subtracting the mean and then dividing by the standard deviation for each feature; you can use the [[http://docs.scipy.org/doc/numpy/reference/routines.statistics.html | Numpy statistics module]] to perform the required calculations).
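For concreteness, here is a minimal standardization sketch using Numpy; the function name and the choice to reuse the training-set statistics on the test set are illustrative, not a requirement of the assignment.

<code python>
import numpy as np

def standardize(X_train, X_test):
    """Standardize each feature (column) to zero mean and unit variance,
    using statistics computed on the training set only."""
    mu = X_train.mean(axis=0)
    sigma = X_train.std(axis=0)
    sigma[sigma == 0] = 1.0            # guard against constant features
    return (X_train - mu) / sigma, (X_test - mu) / sigma
</code>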

==== Part 1 ====
  
Implement ridge regression, keeping the same API you used in implementing the classifiers in assignment 2, along with functions for computing the following measures of error:
  
  * The Root Mean Square Error (RMSE).
  * The Mean Absolute Deviation (MAD), defined as
$$MAD(h) = \frac{1}{N}\sum_{i=1}^N |y_i - h(\mathbf{x}_i)|.$$
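As a point of reference, a sketch along the following lines would do; the ''fit''/''predict'' method names are only an assumption about what your assignment 2 API looks like, and the closed-form solution with an unregularized bias term is one possible design choice, not a prescription.

<code python>
import numpy as np

class RidgeRegression:
    """Ridge regression via the closed-form solution (a sketch, not a required design)."""

    def __init__(self, lam=1.0):
        self.lam = lam
        self.w = None

    def fit(self, X, y):
        Xb = np.hstack([X, np.ones((X.shape[0], 1))])   # append a bias column
        I = np.eye(Xb.shape[1])
        I[-1, -1] = 0.0                                 # do not regularize the bias
        self.w = np.linalg.solve(Xb.T @ Xb + self.lam * I, Xb.T @ y)
        return self

    def predict(self, X):
        Xb = np.hstack([X, np.ones((X.shape[0], 1))])
        return Xb @ self.w

def rmse(y, y_pred):
    """Root Mean Square Error."""
    return np.sqrt(np.mean((y - y_pred) ** 2))

def mad(y, y_pred):
    """Mean Absolute Deviation."""
    return np.mean(np.abs(y - y_pred))
</code>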
  
With the code you just implemented, your next task is to explore the dependence of the error on the value of the regularization parameter, $\lambda$.
In what follows, set aside 30% of the data as a test set, and compute the in-sample error and the test-set error as a function of the parameter $\lambda$ on the red wine data.  Choose the values of $\lambda$ on a logarithmic scale (0.01, 0.1, 1, 10, 100, 1000) and plot the RMSE only.
Repeat the same experiment, but instead of using all the training data, use only 20 randomly chosen training examples.
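One way to organize this experiment is sketched below; the 70/30 split logic and the ''standardize''/''RidgeRegression''/''rmse'' helpers refer to the sketches above and are illustrative assumptions, not required names.

<code python>
import numpy as np
import matplotlib.pyplot as plt

def lambda_sweep(X, y, lambdas=(0.01, 0.1, 1, 10, 100, 1000),
                 test_frac=0.3, seed=0):
    """Plot in-sample and test-set RMSE of ridge regression for each lambda."""
    rng = np.random.RandomState(seed)
    idx = rng.permutation(len(y))
    n_test = int(test_frac * len(y))
    test, train = idx[:n_test], idx[n_test:]
    X_train, X_test = standardize(X[train], X[test])
    y_train, y_test = y[train], y[test]

    train_err, test_err = [], []
    for lam in lambdas:
        model = RidgeRegression(lam).fit(X_train, y_train)
        train_err.append(rmse(y_train, model.predict(X_train)))
        test_err.append(rmse(y_test, model.predict(X_test)))

    plt.semilogx(lambdas, train_err, 'o-', label='in-sample RMSE')
    plt.semilogx(lambdas, test_err, 's-', label='test-set RMSE')
    plt.xlabel(r'$\lambda$')
    plt.ylabel('RMSE')
    plt.legend()
    plt.show()
    return train_err, test_err
</code>

For the 20-example variant you can reuse the same structure, simply subsampling 20 indices from the training portion before fitting.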
  
Now answer the following:
  
  * What is the optimal value of $\lambda$?
  * What observations can you make on the basis of these plots?  (The concepts of overfitting/underfitting should be addressed in your answer.)
  * Finally, compare the results that you are getting with the published results in the paper linked above.  In particular, is the performance you have obtained comparable to that observed in the paper?
  
==== Part 2 ====

Regression Error Characteristic (REC) curves are an interesting way of visualizing regression error, as described
in the following [[http://machinelearning.wustl.edu/mlpapers/paper_files/icml2003_BiB03.pdf|paper]].
Write a function that plots the REC curve of a regression method, and plot the REC curve of the best regressor you found in Part 1 of the assignment.
What can you learn from this curve that you cannot learn from an error measure such as RMSE or MAD?
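A possible sketch of such a function is shown below; using absolute error on the horizontal axis is a choice made here for illustration (the paper also discusses other error measures), and the function name is hypothetical.

<code python>
import numpy as np
import matplotlib.pyplot as plt

def rec_curve(y, y_pred, label=None):
    """Plot the REC curve: for each error tolerance, the fraction of
    examples whose absolute error is at most that tolerance."""
    errors = np.sort(np.abs(y - y_pred))
    accuracy = np.arange(1, len(errors) + 1) / float(len(errors))
    plt.step(errors, accuracy, where='post', label=label)
    plt.xlabel('error tolerance')
    plt.ylabel('fraction of examples within tolerance')
    if label is not None:
        plt.legend()
</code>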

==== Part 3 ====
  
As we discussed in class, the magnitude of the weight vector can be interpreted as a measure of feature importance.
Next, perform the following experiment:
Incrementally remove the feature whose weight has the lowest absolute value and retrain the ridge regression model.
Plot the RMSE on the test set that you have set aside as a function of the number of features that remain.
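One way to structure this loop is sketched below; it reuses the ''RidgeRegression'' and ''rmse'' helpers assumed in the earlier sketches, and treating the bias as the last entry of the weight vector follows that sketch rather than anything required by the assignment.

<code python>
import numpy as np

def backward_feature_removal(X_train, y_train, X_test, y_test, lam):
    """Repeatedly drop the feature whose weight has the smallest absolute
    value, retrain, and record the test-set RMSE after each removal."""
    remaining = list(range(X_train.shape[1]))
    results = []                                   # (number of features, test RMSE)
    while remaining:
        model = RidgeRegression(lam).fit(X_train[:, remaining], y_train)
        results.append((len(remaining),
                        rmse(y_test, model.predict(X_test[:, remaining]))))
        weights = model.w[:-1]                     # last entry is the bias term
        weakest = int(np.argmin(np.abs(weights)))  # position within the remaining list
        remaining.pop(weakest)
    return results
</code>

Plotting the second element of each pair against the first then gives the required RMSE-versus-number-of-features curve.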
  
===== Submission =====
  
Submit your report via Canvas.  Python code can be displayed in your report if it is short and helps explain what you have done.  The sample LaTeX document provided in assignment 1 shows how to display Python code.  Submit the Python code that was used to generate the results as a file called ''assignment3.py'' (you can split the code into several .py files; Canvas allows you to submit multiple files).  Typing

<code>
$ python assignment3.py
</code>

should generate all the tables/plots used in your report.
  
===== Grading =====
<code>
              Grammar, spelling, and punctuation.
</code>