• ECE 657A Winter 2021 - Test 2

    Tags:

    #W21Test2
    #done

  • Scope:

    Content from live lectures and self-study videos from weeks 5 - 9 inclusive (but not including Deep Learning)

    Topics:

    Streaming Ensemble Methods, Feature Extraction, Dimensionality Reduction, Word Embeddings, SVM, Clustering Algorithms and Evaluation Measures, Anomaly Detection

    Date and Time:

    Was originally scheduled for March 11, will now be March 18-20

    • you will have a 72-hour period in which to start the test, March 18-20
    • then you will have a 3-hour window to actually do the test once you start

    The test will be on Crowdmark as a mixture of multiple-choice questions and submitted answers for calculations or derivations.

    Multiple Choice

    For multiple-choice questions it is possible for there to be multiple correct answers. If you choose any of the correct answers you will get the points, but selecting any incorrect answers will deduct a point.

    Submitted Answers

    Submitted answer questions can be either:

    • uploaded as an image just as you do for assignments (pdf files are also accepted, but if you can convert it to a png image it will be easier to grade).
    • typed in directly to the text box using Markdown and LaTeX formatting. You should be able to preview the text to see how it looks.
      Either method is perfectly acceptable, as long as it is neat and clear.

    #W21Test2
    #done

  • Q2 : SVM Classification

    For a two-class classification problem, we use an SVM classifier and obtain the following separating hyperplane. Four datapoints have been labelled in the training data. Refer to the following figure for the next four questions.

    ![svmfigures.png](https://usercontent.crowdmark.com/c851e932-9ca7-4711-9467-3733f2424192.png)

    Tags:

    #W21Test2
    #done

  • Q3-1 : K-Means Calculation

    Consider the points (purple circles) in the figure below and the current means (red crosses) for each cluster, with the cluster boundaries shown. In the next round of k-means some points will be assigned to different clusters.

    Fill in a table with the coordinates, in the form $(X,Y)$, of the points that would move, and the cluster they move to.

    Note: For all calculations use Manhattan distance, for example, $d(c_i, c_j)=2$, for any pair of points in $C$.

    ![kmeansfigure.png](https://usercontent.crowdmark.com/4001e7ff-defc-49cf-b092-267d4e85f377.png)

    Example markdown code:

    Note: this is example code for your table only, it is not implied how many points you should enter.

    | Point (X,Y) | Moves to ... |

    | --------- | ---------- |

    | $(x,y)$ | $K$ |

    | $(x,y)$ | $K$ |

    | $(x,y)$ | $K$ |

    | $(x,y)$ | $K$ |

    Answer:

    | Point (X,Y) | Moves to ... |
    | --------- | ---------- |
    | $(1,7)$ | $B$ |
    | $(8,6)$ | $C$ |
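
    A minimal sketch of this assignment step in Python (the coordinates below are placeholders, not the figure's values; `points` and `means` are assumptions):

    ```python
    import numpy as np

    # Placeholder data standing in for the figure; the actual points
    # and current means come from kmeansfigure.png.
    points = np.array([[1, 7], [8, 6], [2, 2], [6, 3]])
    means = {"A": np.array([1, 4]), "B": np.array([2, 8]), "C": np.array([7, 7])}

    # One assignment round of k-means under Manhattan (L1) distance:
    # each point is assigned to the cluster whose current mean is closest.
    for p in points:
        dists = {k: int(np.abs(p - m).sum()) for k, m in means.items()}
        nearest = min(dists, key=dists.get)
        print(f"point {tuple(p)} -> cluster {nearest}  distances={dists}")
    ```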

    Tags:

    #W21Test2
    #done

  • Q3-2: Hierarchical Clustering Calculation

    Referring to figure below, calculate the inter- and intra-cluster distances and decide on merging clusters.

    • Compute the single link and complete link distances
    • In the final row indicate which cluster would be merged during agglomerative clustering in each scenario

    ![clusteringfigures-100.png](https://usercontent.crowdmark.com/038da052-6e66-49b9-b04c-2f9593c20586.png)

    Example Markdown code

    | $d(X,Y)$ | Single Link | Complete Link |

    | --- | --- | --- |

    | $d(A,B)$ | ?? | ?? |

    | $d(A,C)$ | ?? | ?? |

    | $d(B,C)$ | ?? | ?? |

    | Merge $X$ and $Y$ | ?? | ?? |

    Answer:

    | $d(X,Y)$ | Single Link | Complete Link |
    | --- | --- | --- |
    | $d(A,B)$ | 6 | 11 |
    | $d(A,C)$ | 5 | 12 |
    | $d(B,C)$ | 4 | 10 |
    | Merge $X$ and $Y$ | (B,C) | (B,C) |
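
    A hedged sketch of the computation (the cluster coordinates below are invented stand-ins for the figure, and Manhattan distance is assumed as in Q3-1):

    ```python
    import numpy as np
    from itertools import product

    # Invented stand-in coordinates; the real clusters are in the figure.
    clusters = {
        "A": np.array([[0, 0], [1, 2]]),
        "B": np.array([[5, 4], [6, 6]]),
        "C": np.array([[9, 5], [8, 8]]),
    }

    def link_distances(X, Y):
        """Single link = min pairwise distance, complete link = max."""
        d = [np.abs(x - y).sum() for x, y in product(X, Y)]  # Manhattan metric
        return min(d), max(d)

    for a, b in [("A", "B"), ("A", "C"), ("B", "C")]:
        single, complete = link_distances(clusters[a], clusters[b])
        print(f"d({a},{b}): single={single}, complete={complete}")
    # Agglomerative clustering merges the pair with the smallest link distance.
    ```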

    Tags:

    #W21Test2
    #done

  • Q4: Multiple Choice - Choose a Line

    Use the following figure for the next four questions.
    The plot shows a set of points from the circle class and the square class.

    ![clusteringfigures-100.png](https://usercontent.crowdmark.com/f2d0ca50-1719-4114-9157-27d4cb3f6186.png)

    Tags:

    #W21Test2
    #done

  • Q5 Drawing Algorithm Outputs

    Qualitative Behaviour of Classification and Clustering Algorithms

    You will compare the following pairs of classification or clustering algorithms by drawing datasets on which one algorithm (X) can find a perfect separation and another algorithm (Y) cannot. This means that, given a labelling for the points (you can draw them as X and O, or squares and circles), algorithm X will define an enclosing shape, or dividing lines, that perfectly groups all the points into their classes. For unsupervised clustering algorithms, we can also assume we know the true labels of the data points, even though the algorithm does not.

    | Case | Algorithm X Finds Perfect Classification/Cluster Separation | Algorithm Y Does Not Find A Perfect Classification/Cluster Separation |
    | ---- | --- | --- |
    | A | X=DBScan | Y=k-means |
    | B | X=k-means | Y=DBScan |
    | C | X=Decision Trees | Y=linear SVM |

    In each case you should have two parts to your answer:

    1. [2 points] Draw a small 2D dataset with as many data points as you want. Also, draw the general shape of the expected class or cluster boundaries each algorithm would produce. You can use a solid line for the successful algorithm and a dotted line for the unsuccessful algorithm.
    2. [2 points] Explain briefly in words why Y fails on this dataset and why X succeeds.

    Note: For each part, be sure to clearly label the question A.1, A.2, B.1, B.2 etc. All the answers can be written on paper (or tablet) and uploaded as a photo (or saved as a pdf).
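
    As a hedged illustration of Case A (not a substitute for the drawing; the scikit-learn usage and parameter values here are assumptions): two concentric rings are recovered by DBSCAN, which grows clusters through dense regions of any shape, but not by k-means, whose clusters are convex regions split by straight (Voronoi) boundaries.

    ```python
    import numpy as np
    from sklearn.datasets import make_circles
    from sklearn.cluster import DBSCAN, KMeans

    # Case A: DBSCAN succeeds, k-means fails.
    X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

    db = DBSCAN(eps=0.2, min_samples=5).fit_predict(X)                   # follows the rings
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)  # cuts them in half

    def agreement(labels, y):
        # Fraction matched under the better of the two label permutations.
        acc = (labels == y).mean()
        return max(acc, 1 - acc)

    print("DBSCAN agreement:", agreement(db, y))   # typically ~1.0
    print("k-means agreement:", agreement(km, y))  # typically ~0.5
    ```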

    #W21Test2
    #done

  • Derivation

    Eigenvector Derivation

    Assume that the optimization problem for finding a subspace is as follows. Let $\textbf{U}$ be the projection matrix onto the subspace and let $\textbf{A}$ be a symmetric matrix.
    Solve the optimization problem for finding the projection matrix.

    $$\underset{\textbf{U}}{\text{minimize}} \quad \|\textbf{X}\textbf{A} - \textbf{U}\textbf{U}^\top\textbf{X}\textbf{A}\|_F^2$$

    subject to

    $$\textbf{U}^\top \textbf{U} = \textbf{I}$$

    Note: the answer can be typed in with markdown and latex or written on paper (or tablet) and attached as an image or pdf file.

    #W21Test2
    #done

  • Q1-1 : Why is dual PCA useful?

    • It is faster than PCA.
    • When the dimensionality of data is much less than the population of data, it is faster than PCA.
    • It is required to develop kernel PCA.

    Answers:

    • It is required to develop kernel PCA.
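
    A hedged sketch of the idea behind the answer (data and names are illustrative): dual PCA eigendecomposes the $n \times n$ Gram matrix $\textbf{X}\textbf{X}^\top$ instead of the $d \times d$ covariance, and because only inner products between samples appear, they can be replaced by a kernel function, which is exactly what kernel PCA needs.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(20, 1000))      # n=20 samples, d=1000 features
    Xc = X - X.mean(axis=0)              # center the data

    # Dual PCA: eigendecompose the small n x n Gram matrix of inner products.
    G = Xc @ Xc.T
    evals, V = np.linalg.eigh(G)
    evals, V = evals[::-1], V[:, ::-1]   # sort descending

    # Recover the top-2 principal directions in the original d-dim space.
    U = Xc.T @ V[:, :2] / np.sqrt(evals[:2])
    print(U.shape)                       # (1000, 2); replacing G with a kernel
                                         # matrix K gives kernel PCA
    ```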

    Tags:

    #W21Test2
    #done

  • Q1-2: Is the following statement True or False?

    Even though it is linear, soft-margin SVM is a powerful algorithm because the support vectors are on the optimal hyperplane for dividing the points given the constraints.

    True or False?

    Answer:

    False. The support vectors are the closest points to the dividing hyperplane; they are not on the hyperplane.
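
    For reference (standard formulation, not quoted from the lectures), the separating hyperplane and the margin constraints that the support vectors satisfy are distinct:

    $$\textbf{w}^\top\textbf{x} + b = 0 \;\;\text{(separating hyperplane)}, \qquad y_i(\textbf{w}^\top\textbf{x}_i + b) = 1 - \xi_i \;\;\text{(support vectors, soft margin)}$$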

    Tags:

    #trueorfalse
    #svm #classification
    #W21Test2
    #done

  • Q2-1: What types of SVM algorithms are most likely being used to determine this decision boundary?

    • Linear SVM with Soft Margins
    • RBF Kernel SVM with Hard Margins
    • Gaussian Kernel SVM with Soft Margins
    • RBF Kernel SVM with Soft Margins

    Answer:

    • Gaussian Kernel SVM with Soft Margins
    • RBF Kernel SVM with Soft Margins

    Tags:

    #W21Test2
    #done

  • Q2-2: What would be the most likely effect of removing point B from the dataset on the resulting separating hyperplane?

    • Almost no effect
    • Small effect
    • Large effect

    Answer:

    • Almost no effect

    Tags:

    #W21Test2
    #done

  • Q2-3: Identify the point which will have the largest impact on the shape of the boundary upon its removal.

    • A
    • B
    • C
    • D

    Answer:

    • C

    Tags:

    #W21Test2
    #done

  • Q2-4: Provide a short but specific explanation for your reasoning for your answers to the previous two questions. Use the mathematical formulation of the basic SVM algorithm as the basis for your answer.

    Answer:

    Some detailed description of which points are support vectors and how they would influence the optimization. Full marks for discussing that $\alpha$ will be zero for points which are not support vectors.
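
    A sketch of the standard soft-margin dual that this reasoning rests on (notation assumed):

    $$\underset{\boldsymbol{\alpha}}{\text{maximize}} \;\; \sum_i \alpha_i - \frac{1}{2}\sum_{i,j}\alpha_i\alpha_j y_i y_j\, k(\textbf{x}_i,\textbf{x}_j) \quad \text{subject to} \quad 0 \le \alpha_i \le C, \;\; \sum_i \alpha_i y_i = 0$$

    Points with $\alpha_i = 0$ drop out of the decision function $f(\textbf{x}) = \sum_i \alpha_i y_i\, k(\textbf{x}_i, \textbf{x}) + b$, so removing one (like B) barely moves the boundary, while removing a support vector with $\alpha_i > 0$ (like C) changes the solution.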

    Tags:

    #W21Test2
    #done

  • Q4-1: Select the line which best fits the first principal component resulting from applying PCA on the datapoints and ignoring the labels.

    • A
    • B
    • C
    • D

    Answer:

    • C

    Tags:

    #W21Test2
    #done

  • Q4-2: Select the line which best fits the second principal component resulting from applying PCA on the datapoints and ignoring the labels.

    • A
    • B
    • C
    • D

    Answer:

    • A

    Tags:

    #W21Test2
    #done

  • Q4-3: Select the line which best fits the first eigenvector resulting from applying FDA to the datapoints and using the labels for training.

    • A
    • B
    • C
    • D

    Answer:

    • D
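
    A hedged way to sanity-check the intuition behind Q4-1 to Q4-3 (the data here is synthetic, not the test figure): PCA's first component follows the direction of greatest total variance, ignoring labels, while FDA/LDA picks the direction that best separates the classes.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(1)
    # Two elongated classes offset perpendicular to their long axis, so the
    # max-variance direction differs from the best class-separating direction.
    cov = [[4.0, 0.0], [0.0, 0.2]]
    X = np.vstack([rng.multivariate_normal([0, 0.0], cov, 100),
                   rng.multivariate_normal([0, 1.5], cov, 100)])
    y = np.array([0] * 100 + [1] * 100)

    pca = PCA(n_components=2).fit(X)
    lda = LinearDiscriminantAnalysis().fit(X, y)
    print("PC1:", pca.components_[0])   # roughly along the long (x) axis
    print("PC2:", pca.components_[1])   # perpendicular to PC1
    print("FDA:", lda.coef_[0] / np.linalg.norm(lda.coef_[0]))  # roughly along y
    ```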

    Tags:

    #W21Test2
    #done

  • Answer: Eigenvector Derivation

    $$\def\b{\boldsymbol} \begin{align*}
    &\mathcal{L} = \|\b{X}\b{A} - \b{U}\b{U}^\top\b{X}\b{A}\|_F^2 + \textbf{tr}(\b{\Lambda}^\top (\b{U}^\top \b{U} - \b{I})) \\
    & \|\b{X}\b{A} - \b{U}\b{U}^\top\b{X}\b{A}\|_F^2 = \textbf{tr}\Big( (\b{X}\b{A} - \b{U}\b{U}^\top\b{X}\b{A})^\top (\b{X}\b{A} - \b{U}\b{U}^\top\b{X}\b{A}) \Big) \\
    & = \textbf{tr}\Big( (\b{A}^\top\b{X}^\top - \b{A}^\top\b{X}^\top\b{U}\b{U}^\top) (\b{X}\b{A} - \b{U}\b{U}^\top\b{X}\b{A}) \Big) \\
    & = \textbf{tr}\Big( \b{A}^\top \b{X}^\top \b{X} \b{A} - \b{A}^\top \b{X}^\top \b{U} \b{U}^\top \b{X} \b{A} - \b{A}^\top \b{X}^\top \b{U} \b{U}^\top \b{X} \b{A} + \b{A}^\top \b{X}^\top \b{U} \underbrace{\b{U}^\top \b{U}}_{\b{I}} \b{U}^\top\b{X}\b{A} \Big) \\
    & = \textbf{tr}\Big( \b{A}^\top \b{X}^\top \b{X} \b{A} - \b{A}^\top \b{X}^\top \b{U} \b{U}^\top \b{X} \b{A} \Big) = \textbf{tr}\Big( \b{A} \b{X}^\top \b{X} \b{A} - \b{A} \b{X}^\top \b{U} \b{U}^\top \b{X} \b{A} \Big) \\
    & = \textbf{tr}\Big( \b{A}^2 \b{X}^\top \b{X} - \b{A}^2 \b{X}^\top \b{U} \b{U}^\top \b{X} \Big) = \textbf{tr}\Big( \b{A}^2 \b{X}^\top \b{X} - \b{X} \b{A}^2 \b{X}^\top \b{U} \b{U}^\top \Big) \\
    &\mathcal{L} = \textbf{tr}\Big( \b{A}^2 \b{X}^\top \b{X} - \b{X} \b{A}^2 \b{X}^\top \b{U} \b{U}^\top \Big) + \textbf{tr}(\b{\Lambda}^\top (\b{U}^\top \b{U} - \b{I})) \\
    &\frac{\partial \mathcal{L}}{\partial \b{U}} = - 2 \b{X} \b{A}^2 \b{X}^\top \b{U} + 2 \b{\Lambda} \b{U} \overset{\text{set}}{=} \b{0} \implies \b{X} \b{A}^2 \b{X}^\top \b{U} = \b{\Lambda} \b{U} \implies \b{U} = \text{eig}(\b{X} \b{A}^2 \b{X}^\top)
    \end{align*}$$

    #W21Test2
    #done
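
    A quick numerical check of this result (shapes and data are arbitrary assumptions): the top eigenvectors of $\textbf{X}\textbf{A}^2\textbf{X}^\top$ should achieve a lower reconstruction error than any other orthonormal basis of the same size.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(5, 8))
    A = rng.normal(size=(8, 8)); A = (A + A.T) / 2   # symmetric, as assumed

    M = X @ A @ A @ X.T                              # X A^2 X^T from the derivation
    evals, evecs = np.linalg.eigh(M)
    U = evecs[:, ::-1][:, :2]                        # top-2 eigenvectors

    def loss(U):
        R = X @ A - U @ U.T @ X @ A
        return np.linalg.norm(R, "fro") ** 2

    Q, _ = np.linalg.qr(rng.normal(size=(5, 2)))     # random orthonormal basis
    print(loss(U), "<=", loss(Q))                    # eigen solution wins
    ```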

{"cards":[{"_id":"605fb26c41b97f0409f16d91","treeId":"3c4307e7d38a0c1d4a0003ed","seq":22397269,"position":0.125,"parentId":null,"content":"# ECE 657A Winter 2021 - Test 2\n\n#### Tags:\n#W21Test2\n#done"},{"_id":"605fb26c41b97f0409f16d92","treeId":"3c4307e7d38a0c1d4a0003ed","seq":22397271,"position":0.1875,"parentId":null,"content":"## Scope:\nContent from live lectures and self-study videos from *weeks 5 - 9* inclusive (but not including Deep Learning)\n\n## Topics:\nStreaming Ensemble Methods. Feature Extraction, Dimensionality Reduction, Word Embeddings, SVM, Clustering Algorithms and Evaluation Measures, Anomaly Detection\n\n## Date and Time:\nWas originally scheduled for March 11, will now be **March 18-20**\n- you will have a 72 period to *start* the test, March 18-20\n- then you will have a 3-hour window to actually do the test once you start\n\nThe test will either be Crowdmark as a mixture of *multiple-choice questions* and *submitted answers* of calculations or derivations. \n\n### Multiple Choice\nFor multiple-choice questions it is **possible for there to be multiple correct answers**, if you choose any of the correct answers you will get the points, but selecting any incorrect answers will deduct a point.\n\n### Submitted Answers\nSubmitted answer questions can be either:\n- uploaded as an image just as you do for assignments (pdf files are also accepted, but if you can convert it to a png image it will be easier to grade).\n- typed in directly to the text box using Markdown and LaTeX formatting. You should be able to preview the text to see how it looks. \nEither method is perfectly acceptable, as long as it is neat and clear. \n\n#W21Test2\n#done"},{"_id":"605fb26c41b97f0409f16d7e","treeId":"3c4307e7d38a0c1d4a0003ed","seq":22397311,"position":1,"parentId":"605fb26c41b97f0409f16d92","content":"## Q1-1 : Why is dual PCA useful?\n\n\n- It is faster than PCA.\n- When the dimensionality of data is much less than the population of data, it is faster than PCA.\n- It is required to develop kernel PCA.\n\n#### Answers:\n- It is required to develop kernel PCA.\n\n#### Tags:\n#W21Test2\n#done"},{"_id":"605fb26c41b97f0409f16d88","treeId":"3c4307e7d38a0c1d4a0003ed","seq":22397312,"position":2,"parentId":"605fb26c41b97f0409f16d92","content":"## Q1-2: Is the following statement True or False?\nEven though it is linear, soft-margin SVM is a powerful algorithm because the support vectors are on the optimal hyperplane for dividing the points given the constraints.\n\nTrue or False?\n\n#### Answer:\n**False.** The support vectors are the closest points to the dividing hyperplane, they are not on the hyperplane.`\n\n#### Tags:\n#trueorfalse\n#svm #classification\n#W21Test2\n#done"},{"_id":"605fb26c41b97f0409f16d8a","treeId":"3c4307e7d38a0c1d4a0003ed","seq":22397317,"position":1.125,"parentId":null,"content":"## Q2 : SVM Classification\nFor a two-class classification problem, we use an SVM classifier and obtain the following separating hyperplane. Four datapoints have been labelled in the training data. 
Refer to the following figure for the next four questions.\n\n![svmfigures.png](https://usercontent.crowdmark.com/c851e932-9ca7-4711-9467-3733f2424192.png)\n\n\n#### Tags: \n#W21Test2\n#done\n"},{"_id":"605fb26c41b97f0409f16d8b","treeId":"3c4307e7d38a0c1d4a0003ed","seq":22397316,"position":1,"parentId":"605fb26c41b97f0409f16d8a","content":"### Q2-1: What types of SVM algorithms are most likely being used to determine this decision boundary?\n\n- Linear SVM with Soft Margins\n- RBF Kernel SVM with Hard Margins\n- Gaussian Kernel SVM with Soft Margins\n- RBF Kernel SVM with Soft Margins\n\n#### Answer:\n- `Gaussian Kernel SVM with Soft Margins`\n- `RBF Kernel SVM with Soft Margins`\n\n\n#### Tags: \n#W21Test2\n#done\n"},{"_id":"605fb26c41b97f0409f16d8c","treeId":"3c4307e7d38a0c1d4a0003ed","seq":22397318,"position":2,"parentId":"605fb26c41b97f0409f16d8a","content":"### Q2-2: What would be the most likely effect of removing point B from the dataset on the resulting separating hyperplane?\n\n- Almost no effect\n- Small effect\n- Large effect\n\n\n#### Answer:\n- Almost no effect\n\n#### Tags: \n#W21Test2\n#done"},{"_id":"605fb26c41b97f0409f16d8d","treeId":"3c4307e7d38a0c1d4a0003ed","seq":22397319,"position":3,"parentId":"605fb26c41b97f0409f16d8a","content":"### Q2-3: Identify the point which will have the *largest impact* on the shape of the boundary upon its removal.\n\n- A\n- B\n- C\n- D\n\n#### Answer:\n- C\n\n#### Tags: \n#W21Test2\n#done"},{"_id":"605fb26c41b97f0409f16d8e","treeId":"3c4307e7d38a0c1d4a0003ed","seq":22397320,"position":4,"parentId":"605fb26c41b97f0409f16d8a","content":"### Q2-4: Provide a short but specific explanation for your reasoning for your answers to the previous two questions. Use the mathematical formulation of the basic SVM algorithm as the basis for your answer.\n\n#### Answer:\nSome detailed description involving which points are support vectors and which and how they would influence the optimization. Full marks for discussing that alpha will be zero for points which are not the support vectors.\n\n#### Tags:\n#W21Test2\n#done"},{"_id":"605fb26c41b97f0409f16d96","treeId":"3c4307e7d38a0c1d4a0003ed","seq":22397330,"position":1.21875,"parentId":null,"content":"# Q3-1 : K-Means Calculation\nGiven the points (purple circles) in the figure below, and the current means (red crosses) for each cluster boundary shown. In the next round of k-means some points will be assigned to different clusters. \n\nFill in a table with the coordinates, in the form $(X,Y)$, of the points that would move, and the cluster they move to.\n\n*Note: For all calculations use Manhatten distance, for example, $d(c_i, c_j)=2$, for any pair of points in $C$.*\n\n\n![kmeansfigure.png](https://usercontent.crowdmark.com/4001e7ff-defc-49cf-b092-267d4e85f377.png)\n\n\n### Example markdown code: \n*Note:* this is example code for your table only, it is not implied how many points you should enter.\n\n`| Point (X,Y) | Moves to ... |`\n\n`| --------- | ---------- |`\n\n`| $(x,y)$ | $K$ |`\n\n`| $(x,y)$ | $K$ |` \n\n`| $(x,y)$ | $K$ |` \n\n`| $(x,y)$ | $K$ |`\n\n\n#### Answer:\n\n| Point (X,Y) | Moves to ... 
|\n| --------- | ---------- |\n| $(1,7)$ | $B$ |\n| $(8,6)$ | $C$ | \n\n#### Tags:\n#W21Test2\n#done\n"},{"_id":"605fb26c41b97f0409f16d97","treeId":"3c4307e7d38a0c1d4a0003ed","seq":22397331,"position":1.3125,"parentId":null,"content":"# Q3-2: Hierarchical Clustering Calculation\nReferring to figure below, calculate the inter- and intra-cluster distances and decide on merging clusters.\n- Compute the single link and complete link distances\n- In the final row indicate which cluster would be merged during agglomerative clustering in each scenario\n\n\n![clusteringfigures-100.png](https://usercontent.crowdmark.com/038da052-6e66-49b9-b04c-2f9593c20586.png)\n\n\n##### Example Markdown code\n`| $d(X,Y)$ | Single Link | Complete Link |`\n\n`| --- | --- | --- |`\n\n`| $d(A,B)$ | ?? | ?? |`\n\n`| $d(A,C)$ | ?? | ?? |`\n\n`| $d(B,C)$ | ?? | ?? |`\n\n`| Merge $X$ and $Y$ | ?? | ?? |`\n\n\n#### Answer:\n| $d(X,Y)$ | Single Link | Complete Link |\n| --- | --- | --- |\n| $d(A,B)$ | 6 | 11 |\n| $d(A,C)$ | 5 | 12 |\n| $d(B,C)$ | 4 | 10 |\n| Merge $X$ and $Y$ | (B,C) | (B,C) |\n\n#### Tags:\n#W21Test2\n#done\n"},{"_id":"605fb26c41b97f0409f16d7f","treeId":"3c4307e7d38a0c1d4a0003ed","seq":22397333,"position":1.5,"parentId":null,"content":"## Q4: Multiple Choice - Choose a Line\nUse the following figure for the next four questions\nThe plot shows a set of points from the circle class and the square class.\n\n![clusteringfigures-100.png](https://usercontent.crowdmark.com/f2d0ca50-1719-4114-9157-27d4cb3f6186.png)\n\n\n#### Tags:\n#W21Test2\n#done"},{"_id":"605fb26c41b97f0409f16d80","treeId":"3c4307e7d38a0c1d4a0003ed","seq":22397334,"position":1,"parentId":"605fb26c41b97f0409f16d7f","content":"### Q4-1: Select the line which best fits the *first* principal component resulting from applying PCA on the datapoints and ignoring the labels.\n- A\n- B\n- C\n- D\n\n#### Answer:\n- C\n\n#### Tags:\n#W21Test2\n#done\n"},{"_id":"605fb26c41b97f0409f16d81","treeId":"3c4307e7d38a0c1d4a0003ed","seq":22397338,"position":2,"parentId":"605fb26c41b97f0409f16d7f","content":"### Q4-2: Select the line which best fits the *second* principal component resulting from applying PCA on the datapoints and ignoring the labels.\n- A\n- B\n- C\n- D\n\n#### Answer:\n- A\n\n#### Tags:\n#W21Test2\n#done"},{"_id":"605fb26c41b97f0409f16d82","treeId":"3c4307e7d38a0c1d4a0003ed","seq":22397337,"position":3,"parentId":"605fb26c41b97f0409f16d7f","content":"### Q4-3: Select the line which best fits the first eigenvector resulting from applying FDA to the datapoints and using the labels for training.\n- A\n- B\n- C\n- D\n\n#### Answer:\n- D\n\n#### Tags:\n#W21Test2\n#done"},{"_id":"605fb26c41b97f0409f16d98","treeId":"3c4307e7d38a0c1d4a0003ed","seq":22397340,"position":3.3125,"parentId":null,"content":"## Q5 Drawing Algorithm Outputs\n### Qualitative Behaviour of Classification and Clustering Algorithms\n\nYou will compare the following pairs of classification or clustering algorithms by drawing datasets that one algorithm (X) can find a perfect separation and another algorithm (Y) cannot. This means that given a labelling for the points (you can draw them as X and O, or squares and circles) algorithm X will define an enclosing shape, or dividing lines, that perfectly group all the points correctly into their classes. For unsupervised clustering algorithms, we can also assume we know the true labels of the data points, even though the algorithm does not. 
\n\n| Case | Algorithm X Finds Perfect Classification/Cluster Separation | Algorithm Y Does Not Find A Perfect Classification/Cluster Separation |\n| ---- | ----------------------------------------------------------- | ------------------------------------------------------------ |\n| A | X=DBScan | Y=k-means |\n| B | X=k-means | Y=DBScan |\n| C | X=Decision Trees | Y=linear SVM \n\nIn each case you should have two parts to your answer:\n1. [2 points] **Draw** a small 2D dataset with as many data points as you want. Also, draw the *general shape* of the expected class or cluster *boundaries* each algorithm would produce. You can use a solid line for the successful algorithm and a dotted line for the unsuccessful algorithm. \n2. [2 points] **Explain** briefly in words why Y fails on this dataset and why X succeeds. \n\n*Note:* For each part, be sure to clearly label the question A.1, A.2, B.1, B.2 etc. All the answers can be written on paper (or tablet) and uploaded as a photo (or saved as a pdf).\n\n#W21Test2\n#done"},{"_id":"605fb26c41b97f0409f16d83","treeId":"3c4307e7d38a0c1d4a0003ed","seq":22397278,"position":4,"parentId":null,"content":"## Derivation\n### Eigenvector Derivation\nAssume that the optimization problem for finding a subspace is as follows. Let $\\textbf{U}$ be the projection matrix onto the subspace and let $\\textbf{A}$ be a symmetric matrix. \nSolve the optimization problem for finding the projection matrix. \n\n$$\\underset{\\textbf{U}}{\\text{minimize}} \\|\\textbf{X}\\textbf{A} - \\textbf{U}\\textbf{U}^\\top\\textbf{X}\\textbf{A}\\|_F^2$$\nsubject to\n$$\\textbf{U}^\\top \\textbf{U} = \\textbf{I}$$\n\n*Note:* the answer can be typed in with markdown and latex or written on paper (or tablet) and attached as an image or pdf file.\n\n#W21Test2\n#done"},{"_id":"605fb26c41b97f0409f16d84","treeId":"3c4307e7d38a0c1d4a0003ed","seq":22397344,"position":1,"parentId":"605fb26c41b97f0409f16d83","content":"#### Answer:\n\n$$\\def\\b{\\boldsymbol} \\begin{align*}\n&\\mathcal{L} = \\|\\b{X}\\b{A} - \\b{U}\\b{U}^\\top\\b{X}\\b{A}\\|_F^2 + \\textbf{tr}(\\b{\\Lambda}^\\top (\\b{U}^\\top \\b{U} - \\b{I})) \\\\\n& \\|\\b{X}\\b{A} - \\b{U}\\b{U}^\\top\\b{X}\\b{A}\\|_F^2 = \\textbf{tr}\\Big( (\\b{X}\\b{A} - \\b{U}\\b{U}^\\top\\b{X}\\b{A})^\\top (\\b{X}\\b{A} - \\b{U}\\b{U}^\\top\\b{X}\\b{A}) \\Big) \\\\\n& = \\textbf{tr}\\Big( (\\b{A}^\\top\\b{X}^\\top - \\b{A}^\\top\\b{X}^\\top\\b{U}\\b{U}^\\top) (\\b{X}\\b{A} - \\b{U}\\b{U}^\\top\\b{X}\\b{A}) \\Big) \\\\\n& = \\textbf{tr}\\Big( \\b{A}^\\top \\b{X}^\\top \\b{X} \\b{A} - \\b{A}^\\top \\b{X}^\\top \\b{U} \\b{U}^\\top \\b{X} \\b{A} - \\b{A}^\\top \\b{X}^\\top \\b{U} \\b{U}^\\top \\b{X} \\b{A} + \\b{A}^\\top \\b{X}^\\top \\b{U} \\underbrace{\\b{U}^\\top \\b{U}}_{\\b{I}} \\b{U}^\\top\\b{X}\\b{A} \\Big) \\\\\n& = \\textbf{tr}\\Big( \\b{A}^\\top \\b{X}^\\top \\b{X} \\b{A} - \\b{A}^\\top \\b{X}^\\top \\b{U} \\b{U}^\\top \\b{X} \\b{A} \\Big) = \\textbf{tr}\\Big( \\b{A} \\b{X}^\\top \\b{X} \\b{A} - \\b{A} \\b{X}^\\top \\b{U} \\b{U}^\\top \\b{X} \\b{A} \\Big) \\\\\n& = \\textbf{tr}\\Big( \\b{A}^2 \\b{X}^\\top \\b{X} - \\b{A}^2 \\b{X}^\\top \\b{U} \\b{U}^\\top \\b{X} \\Big) = \\textbf{tr}\\Big( \\b{A}^2 \\b{X}^\\top \\b{X} - \\b{X} \\b{A}^2 \\b{X}^\\top \\b{U} \\b{U}^\\top \\Big) \\\\\n&\\mathcal{L} = \\textbf{tr}\\Big( \\b{A}^2 \\b{X}^\\top \\b{X} - \\b{X} \\b{A}^2 \\b{X}^\\top \\b{U} \\b{U}^\\top \\Big) + \\textbf{tr}(\\b{\\Lambda}^\\top (\\b{U}^\\top \\b{U} - \\b{I})) \\\\\n&\\frac{\\partial \\mathcal{L}}{\\partial \\b{U}} = - 2 \\b{X} \\b{A}^2 
\\b{X}^\\top \\b{U} + 2 \\b{\\Lambda} \\b{U} \\overset{\\text{set}}{=} \\b{0} \\implies \\b{X} \\b{A}^2 \\b{X}^\\top \\b{U} = \\b{\\Lambda} \\b{U} \\implies \\b{U} = \\text{eig}(\\b{X} \\b{A}^2 \\b{X}^\\top)\n\\end{align*}$$\n\n\n#W21Test2\n#done\n\n"}],"tree":{"_id":"3c4307e7d38a0c1d4a0003ed","name":"Test-W21-ECE657A-Test2","publicUrl":"test-w21-ece657a-test2","latex":true}}