assignments:assignment3 [CS545 fall 2016]

========= Assignment 3: Support Vector Machines ============

Due:  October 20th at 6pm

===== Part 1:  SVM with no bias term =====
Express the closest centroid algorithm in terms of kernels, i.e. determine how the coefficients $\alpha_i$ will be computed using a given labeled dataset.
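As a starting point, here is a minimal input-space sketch of the closest-centroid rule (all data and helper names are hypothetical, not part of the assignment). Note how the squared distances only involve inner products with the training points -- that observation is the route to the kernelized coefficients you are asked to derive.

```python
# Hypothetical toy sketch of the closest-centroid classifier in input space.
# The kernelized coefficients alpha_i are what this part asks you to derive.

def centroid(points):
    """Coordinate-wise mean of a list of points (tuples)."""
    n = len(points)
    return tuple(sum(p[d] for p in points) / n for d in range(len(points[0])))

def predict(x, pos_points, neg_points):
    """Return +1 if x is closer to the positive centroid, else -1."""
    mu_pos = centroid(pos_points)
    mu_neg = centroid(neg_points)
    d_pos = sum((xi - mi) ** 2 for xi, mi in zip(x, mu_pos))
    d_neg = sum((xi - mi) ** 2 for xi, mi in zip(x, mu_neg))
    return 1 if d_pos < d_neg else -1

pos = [(2, 2), (3, 1)]
neg = [(-2, -1), (-1, -3)]
print(predict((2, 0), pos, neg))  # -> 1 (closer to the positive centroid)
```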
  
===== Part 3:  Soft-margin SVM for separable data =====

Consider training a soft-margin SVM with $C$ set to some positive constant. Suppose the training data is linearly separable. Since increasing the $\xi_i$ can only increase the objective of the primal problem (which we are trying to minimize), at the optimal solution to the primal problem, all the training examples will have $\xi_i$ equal to zero. True or false?  Explain!

Given a linearly separable dataset, is it necessarily better to use a hard-margin SVM over a soft-margin SVM?

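If you want to sanity-check your reasoning numerically, the following is a minimal one-dimensional sketch (the toy data and brute-force grid search are assumptions for illustration, not part of the assignment): it minimizes the scalar primal objective $\frac{1}{2}w^2 + C \sum_i \max(0, 1 - y_i w x_i)$ over $w$ for a linearly separable dataset, at two different values of $C$.

```python
# Hypothetical 1-D sanity check for the soft-margin primal (no bias term):
# minimize 0.5*w^2 + C * sum_i max(0, 1 - y_i * w * x_i) over scalar w
# by brute-force grid search. The toy data below is linearly separable.

xs = [2.0, 3.0, -2.0, -3.0]
ys = [1, 1, -1, -1]

def primal(w, C):
    """Primal objective with the slacks xi_i set to their optimal values."""
    return 0.5 * w * w + C * sum(max(0.0, 1.0 - y * w * x) for x, y in zip(xs, ys))

def solve(C):
    """Grid search over w in [0, 1]; returns (best w, total slack at best w)."""
    w_best = min((i / 1000 for i in range(1001)), key=lambda w: primal(w, C))
    slack = sum(max(0.0, 1.0 - y * w_best * x) for x, y in zip(xs, ys))
    return w_best, slack

print(solve(100.0))  # large C: recovers the hard-margin solution
print(solve(0.01))   # small C: compare the total slack with the claim above
```

Comparing the two runs shows how the optimal slacks depend on $C$ even when the data is separable, which is exactly the issue the true/false question probes.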
===== Part 4:  Using SVMs =====
  
The data for this question comes from a database called SCOP (structural
K_{poly}(\mathbf{x}, \mathbf{x'}) = (1 + \mathbf{x}^T \mathbf{x}')^{p}
$$
Plot the accuracy of the SVM, measured using the balanced success rate
as a function of both the soft-margin parameter of the SVM, and the free parameter
of the kernel function.
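The balanced success rate is commonly defined as the average of the per-class accuracies, so a degenerate classifier that predicts one class everywhere scores only 0.5. A minimal sketch of the metric and of the requested grid over the two parameters follows; the helper names and the `train_and_predict` placeholder are assumptions for illustration (with scikit-learn, `SVC(kernel='poly', gamma=1, coef0=1, degree=p, C=C)` should match the kernel above, but verify this against its documentation).

```python
# Hypothetical helpers for evaluating an SVM over a grid of (C, p) values.

def balanced_success_rate(y_true, y_pred):
    """Average of per-class accuracies (robust to class imbalance)."""
    classes = set(y_true)
    per_class = []
    for c in classes:
        idx = [i for i, y in enumerate(y_true) if y == c]
        correct = sum(1 for i in idx if y_pred[i] == c)
        per_class.append(correct / len(idx))
    return sum(per_class) / len(per_class)

def grid_scores(train_and_predict, Cs, ps, y_true):
    """Score every (C, p) pair; train_and_predict(C, p) is a placeholder
    for your own SVM training/prediction routine on held-out data."""
    return {(C, p): balanced_success_rate(y_true, train_and_predict(C, p))
            for C in Cs for p in ps}
```

The resulting dictionary maps each `(C, p)` pair to its score, which can then be plotted (e.g. as a heat map) against the two parameters.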
assignments/assignment3.txt · Last modified: 2016/09/20 09:34 by asa