Formulate a soft-margin SVM without the bias term, i.e. one where the discriminant function is equal to $\mathbf{w}^{T} \mathbf{x}$.
Derive the saddle point conditions, the KKT conditions, and the dual.
Compare it to the standard SVM formulation that was derived in class.
In class we discussed SMO-type algorithms for optimizing the dual SVM. At each step SMO optimizes two variables at a time, which is the smallest number possible.
Is this still the case for the formulation you have derived? In other words, is two the smallest number of variables that can be optimized at a time?
Hint: consider the difference in the constraints.
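
For reference, here is a sketch of the standard soft-margin SVM (with the bias term $b$) and its dual, as they are commonly written; the notation used in class may differ slightly.

$$
\min_{\mathbf{w},\, b,\, \boldsymbol{\xi}} \ \frac{1}{2}\|\mathbf{w}\|^{2} + C \sum_{i=1}^{n} \xi_{i}
\quad \text{s.t.} \quad y_{i}\left(\mathbf{w}^{T}\mathbf{x}_{i} + b\right) \geq 1 - \xi_{i}, \quad \xi_{i} \geq 0, \quad i = 1, \dots, n
$$

with the corresponding dual

$$
\max_{\boldsymbol{\alpha}} \ \sum_{i=1}^{n} \alpha_{i} - \frac{1}{2} \sum_{i=1}^{n}\sum_{j=1}^{n} \alpha_{i}\alpha_{j} y_{i} y_{j}\, \mathbf{x}_{i}^{T}\mathbf{x}_{j}
\quad \text{s.t.} \quad 0 \leq \alpha_{i} \leq C, \quad \sum_{i=1}^{n} \alpha_{i} y_{i} = 0 .
$$

Note that the equality constraint $\sum_{i} \alpha_{i} y_{i} = 0$ comes from the optimality condition with respect to $b$; it is what forces SMO to update at least two dual variables at a time in the standard formulation.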