assignments:assignment5 (revisions 2015/11/05 10:35 and 2015/11/16 16:27, by asa) [Part 2: Embedded methods: L1 SVM]
It has been argued in the literature that L1-SVMs often lead to solutions that are too sparse. As a workaround, implement the following strategy:
  * Create $k$ sub-samples of the training data, each formed by randomly choosing 80% of the training examples.
  * For each sub-sample, train an L1-SVM.
  * For each feature, compute a score: the number of sub-samples in which that feature received a non-zero weight-vector coefficient.
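The strategy above can be sketched roughly as follows. This is not the required solution, just an illustration; it assumes scikit-learn's `LinearSVC` as the L1-SVM and uses synthetic data, and the function name and parameters (`k`, `frac`, `C`) are placeholders of my choosing:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC


def l1_svm_stability_scores(X, y, k=20, frac=0.8, C=0.1, seed=0):
    """For each feature, count the sub-samples in which an L1-SVM,
    trained on a random `frac` fraction of the training examples,
    assigns that feature a non-zero coefficient."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    scores = np.zeros(d, dtype=int)
    for _ in range(k):
        # Randomly choose 80% of the training examples (without replacement).
        idx = rng.choice(n, size=int(frac * n), replace=False)
        # L1 penalty requires the primal formulation (dual=False).
        clf = LinearSVC(penalty="l1", dual=False, C=C, max_iter=5000)
        clf.fit(X[idx], y[idx])
        # Increment the score of every feature with a non-zero weight.
        scores += (clf.coef_.ravel() != 0).astype(int)
    return scores


# Toy binary problem: 50 features, only 5 informative.
X, y = make_classification(n_samples=200, n_features=50,
                           n_informative=5, random_state=0)
scores = l1_svm_stability_scores(X, y)
```

Each entry of `scores` lies between 0 and $k$; features with high scores are those selected consistently across sub-samples, which mitigates the instability of any single overly sparse L1-SVM fit.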
In the next part of the assignment you will compare this approach to RFE and the Golub filter method that you implemented in part 1.