{"id":555,"date":"2017-12-18T13:12:48","date_gmt":"2017-12-18T13:12:48","guid":{"rendered":"http:\/\/www.nullplug.org\/ML-Blog\/?p=555"},"modified":"2017-12-18T13:12:48","modified_gmt":"2017-12-18T13:12:48","slug":"problem-set-9","status":"publish","type":"post","link":"http:\/\/www.nullplug.org\/ML-Blog\/2017\/12\/18\/problem-set-9\/","title":{"rendered":"Problem Set 9"},"content":{"rendered":"<h2>Problem Set 9<\/h2>\n<p>This is to be completed by December 21st, 2017.<\/p>\n<h3>Exercises<\/h3>\n<ol>\n<li><a href=\"https:\/\/www.datacamp.com\/home\">Datacamp<\/a>\n<ul>\n<li>Complete the lesson:<br \/>\na. Intermediate R: Practice<\/li>\n<\/ul>\n<\/li>\n<li>R Lab:\n<ul>\n<li>Consider a two-class classification problem with one class denoted <em>positive<\/em>.<\/li>\n<li>Given a list of probability predictions for the positive class, a list of the correct labels (0&#8217;s and 1&#8217;s), and a number N>=2 of data points, construct a function which produces an Nx2 matrix\/dataframe whose i-th row (starting at 1) is the pair (x,y), where x is the false positive rate and y is the true positive rate of the classifier which predicts positive whenever the predicted probability is greater than or equal to the threshold (i-1)\/(N-1).<\/li>\n<li>Construct another function which produces the line graph through the points from the previous function.<\/li>\n<li>Finally, construct a third function which estimates the area under the curve of the previous graph.<\/li>\n<\/ul>\n<\/li>\n<\/ol>\n","protected":false},"excerpt":{"rendered":"<p>Problem Set 9 This is to be completed by December 21st, 2017. Exercises Datacamp Complete the lesson: a. Intermediate R: Practice R Lab: Consider a two-class classification problem with one class denoted positive. 
Given a list of probability predictions for the positive class, a list of the correct labels (0&#8217;s and 1&#8217;s), and a &hellip; <a href=\"http:\/\/www.nullplug.org\/ML-Blog\/2017\/12\/18\/problem-set-9\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;Problem Set 9&#8221;<\/span><\/a><\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"advanced_seo_description":"","jetpack_seo_html_title":"","jetpack_seo_noindex":false,"jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":true,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[1],"tags":[],"class_list":["post-555","post","type-post","status-publish","format-standard","hentry","category-general"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"jetpack_shortlink":"https:\/\/wp.me\/p9dIpN-8X","jetpack_likes_enabled":true,"jetpack-related-posts":[{"id":214,"url":"http:\/\/www.nullplug.org\/ML-Blog\/2017\/10\/04\/linear-regression\/","url_meta":{"origin":555,"position":0},"title":"Linear Regression","author":"Justin Noel","date":"October 4, 2017","format":false,"excerpt":"Prediction is very difficult, especially about the future. 
- Niels Bohr The problem Suppose we have a list of vectors (which we can think of as samples) $x_1, \\cdots, x_m\\in \\Bbb R^n$ and a corresponding list of output scalars $y_1, \\cdots, y_m \\in \\Bbb R$ (which we can regard as\u2026","rel":"","context":"In &quot;Regression&quot;","block_context":{"text":"Regression","link":"http:\/\/www.nullplug.org\/ML-Blog\/category\/regression\/"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/www.nullplug.org\/ML-Blog\/wp-content\/uploads\/2017\/10\/trace.png?resize=350%2C200","width":350,"height":200,"srcset":"https:\/\/i0.wp.com\/www.nullplug.org\/ML-Blog\/wp-content\/uploads\/2017\/10\/trace.png?resize=350%2C200 1x, https:\/\/i0.wp.com\/www.nullplug.org\/ML-Blog\/wp-content\/uploads\/2017\/10\/trace.png?resize=525%2C300 1.5x"},"classes":[]},{"id":486,"url":"http:\/\/www.nullplug.org\/ML-Blog\/2017\/11\/03\/problem-set-3\/","url_meta":{"origin":555,"position":1},"title":"Problem Set 3","author":"Justin Noel","date":"November 3, 2017","format":false,"excerpt":"Problem Set 3 This is to be completed by November 9th, 2017. Exercises [Datacamp](https:\/\/www.datacamp.com\/home Complete the lesson \"Introduction to Machine Learning\". This should have also included \"Exploratory Data Analysis\". This has been added to the next week's assignment. MLE for the uniform distribution. 
(Source: Kaelbling\/Murphy) Consider a uniform distribution centered\u2026","rel":"","context":"In &quot;General&quot;","block_context":{"text":"General","link":"http:\/\/www.nullplug.org\/ML-Blog\/category\/general\/"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":61,"url":"http:\/\/www.nullplug.org\/ML-Blog\/2017\/09\/26\/probability-and-statistics-background\/","url_meta":{"origin":555,"position":2},"title":"Probability and Statistics Background","author":"Justin Noel","date":"September 26, 2017","format":false,"excerpt":"Statistics - A subject which most statisticians find difficult, but in which nearly all physicians are expert. - Stephen S. Senn Introduction For us, we will regard probability theory as a way of logically reasoning about uncertainty. I realize that this is not a precise mathematical definition, but neither is\u2026","rel":"","context":"In &quot;Supplementary material&quot;","block_context":{"text":"Supplementary material","link":"http:\/\/www.nullplug.org\/ML-Blog\/category\/supplementary-material\/"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":508,"url":"http:\/\/www.nullplug.org\/ML-Blog\/2017\/11\/09\/problem-set-4\/","url_meta":{"origin":555,"position":3},"title":"Problem Set 4","author":"Justin Noel","date":"November 9, 2017","format":false,"excerpt":"Problem Set 4 This is to be completed by November 16th, 2017. Exercises Datacamp Complete the lessons: a. Supervised Learning in R: Regression b. Supervised Learning in R: Classification c. 
Exploratory Data Analysis (If you did not already do so) Let $\\lambda\\geq 0$, $X\\in \\Bbb R^n\\otimes \\Bbb R^m$, $Y\\in \\Bbb\u2026","rel":"","context":"In &quot;General&quot;","block_context":{"text":"General","link":"http:\/\/www.nullplug.org\/ML-Blog\/category\/general\/"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":33,"url":"http:\/\/www.nullplug.org\/ML-Blog\/2017\/09\/26\/machine-learning-overview\/","url_meta":{"origin":555,"position":4},"title":"Machine Learning Overview","author":"Justin Noel","date":"September 26, 2017","format":false,"excerpt":"Science is knowledge which we understand so well that we can teach it to a computer; and if we don't fully understand something, it is an art to deal with it. Donald Knuth Introduction First Attempt at a Definition One says that an algorithm learns if its performance improves with\u2026","rel":"","context":"In &quot;General&quot;","block_context":{"text":"General","link":"http:\/\/www.nullplug.org\/ML-Blog\/category\/general\/"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/web.stanford.edu\/class\/cs234\/images\/header2.png?resize=350%2C200","width":350,"height":200,"srcset":"https:\/\/i0.wp.com\/web.stanford.edu\/class\/cs234\/images\/header2.png?resize=350%2C200 1x, https:\/\/i0.wp.com\/web.stanford.edu\/class\/cs234\/images\/header2.png?resize=525%2C300 1.5x, https:\/\/i0.wp.com\/web.stanford.edu\/class\/cs234\/images\/header2.png?resize=700%2C400 2x"},"classes":[]},{"id":538,"url":"http:\/\/www.nullplug.org\/ML-Blog\/2017\/11\/24\/problem-set-6\/","url_meta":{"origin":555,"position":5},"title":"Problem Set 6","author":"Justin Noel","date":"November 24, 2017","format":false,"excerpt":"Problem Set 6 This is to be completed by November 30th, 2017. Exercises Datacamp Complete the lesson: a. Text Mining: Bag of Words Exercises from Elements of Statistical Learning Complete exercises: a. 4.2 b. 
4.6 Run the perceptron learning algorithm by hand for the two class classification problem with $(X,Y)$-pairs\u2026","rel":"","context":"In &quot;General&quot;","block_context":{"text":"General","link":"http:\/\/www.nullplug.org\/ML-Blog\/category\/general\/"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]}],"_links":{"self":[{"href":"http:\/\/www.nullplug.org\/ML-Blog\/wp-json\/wp\/v2\/posts\/555","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/www.nullplug.org\/ML-Blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/www.nullplug.org\/ML-Blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/www.nullplug.org\/ML-Blog\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"http:\/\/www.nullplug.org\/ML-Blog\/wp-json\/wp\/v2\/comments?post=555"}],"version-history":[{"count":1,"href":"http:\/\/www.nullplug.org\/ML-Blog\/wp-json\/wp\/v2\/posts\/555\/revisions"}],"predecessor-version":[{"id":556,"href":"http:\/\/www.nullplug.org\/ML-Blog\/wp-json\/wp\/v2\/posts\/555\/revisions\/556"}],"wp:attachment":[{"href":"http:\/\/www.nullplug.org\/ML-Blog\/wp-json\/wp\/v2\/media?parent=555"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/www.nullplug.org\/ML-Blog\/wp-json\/wp\/v2\/categories?post=555"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/www.nullplug.org\/ML-Blog\/wp-json\/wp\/v2\/tags?post=555"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}