Express the closest centroid algorithm in terms of kernels, i.e. determine how the coefficients $\alpha_i$ will be computed using a given labeled dataset.
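As a reminder of the setup (an addition, not part of the original question): the centroid of class $c$ in feature space is

$$\mu_c = \frac{1}{n_c}\sum_{i:\,y_i=c}\phi(x_i),$$

so the squared distance $\|\phi(x)-\mu_c\|^2$ expands into sums of inner products $\phi(x_i)^\top\phi(x_j)$, each of which can be replaced by a kernel evaluation $k(x_i,x_j)$.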
===== Part 3: Soft-margin for separable data =====

Consider training a soft-margin SVM with $C$ set to some positive constant. Suppose the training data is linearly separable. Since increasing the $\xi_i$ can only increase the objective of the primal problem (which we are trying to minimize), at the optimal solution to the primal problem all the training examples will have $\xi_i$ equal to zero. True or false? Explain!

Given a linearly separable dataset, is it necessarily better to use a hard-margin SVM than a soft-margin SVM?
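One way to build intuition before answering (an illustrative experiment, not part of the assignment): take two 1-D points $x=+1$ ($y=+1$) and $x=-1$ ($y=-1$), which are clearly linearly separable. By symmetry the bias is zero, so the soft-margin primal reduces to a one-variable problem in $w$, which we can minimize on a grid for several values of $C$ and read off the resulting slack.

```python
# Two separable 1-D points x = +1 (y = +1) and x = -1 (y = -1).
# With bias fixed at 0 by symmetry, the soft-margin primal becomes
#   minimize  0.5 * w^2 + C * 2 * max(0, 1 - w)   over w,
# since both points have functional margin y_i * w * x_i = w.
# We minimize numerically on a fine grid and report the slack
# xi = max(0, 1 - w) at the optimum.
import numpy as np

w_grid = np.linspace(0.0, 3.0, 300001)  # candidate w values, step 1e-5

for C in (0.1, 0.25, 1.0):
    objective = 0.5 * w_grid**2 + C * 2.0 * np.maximum(0.0, 1.0 - w_grid)
    w_star = w_grid[np.argmin(objective)]
    slack = max(0.0, 1.0 - w_star)  # xi_i for each of the two points
    print(f"C={C}: optimal w = {w_star:.3f}, slack per point = {slack:.3f}")
```

Watching how the optimal slack behaves as $C$ varies should suggest what to argue in the written answer.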
===== Part 4: Using SVMs =====

The data for this question comes from a database called SCOP (structural