ML Notes - Indian Institute of Technology Delhi (anupam/ml_notes.pdf)


1. Probability: Random Variables, Expectation, Variance, Conditional Distribution, Bayes' Rule, Normal Distribution, Joint Distribution

2. Basic Calculus: Partial/Double Derivatives, Maxima, Minima

3. Linear Algebra: Matrices, Eigenvalues (will cover), Vectors, Hyperplanes, Convex Optimization

Some background material is present on the website: convex sets, convex functions, global/local optima.

References: Class notes, Andrew Ng material

Books:

- Tom Mitchell, Machine Learning, 1997
- Duda & Hart, Pattern Classification
- Christopher Bishop, Pattern Recognition and Machine Learning
- K. Murphy, Machine Learning: A Probabilistic Perspective, 2012
- Peter Flach, Machine Learning: The Art and Science of Algorithms that Make Sense of Data, 2012

Saturday, January 7, 2017 4:00 AM


Basics of ML
Saturday, January 7, 2017 12:11 PM


Things from the course:

1. Know basic Machine Learning concepts/framework
2. Various kinds of Machine Learning algorithms (8-10 algos) (Supervised/Unsupervised)
3. Hands-on experience with ML algorithms
4. Art and science behind ML systems

Types of Learning
Sunday, January 8, 2017 5:19 PM


Wednesday, January 18, 2017 8:13 PM


Friday, January 20, 2017 10:29 PM


Friday, January 20, 2017 10:55 PM


Sunday, January 22, 2017 11:04 AM


Sunday, January 22, 2017 11:50 AM


Sunday, January 29, 2017 12:24 PM


Sunday, January 29, 2017 5:52 PM


Analytical Solution to Gradient Descent
Monday, January 30, 2017 7:28 PM


Gaussian Discriminant Analysis (GDA)
Monday, January 30, 2017 7:59 PM


Sunday, January 22, 2017 11:52 AM


Q1. a. [ 4.58687457 5.83129479]

Learning Rate = 0.001

Stopping criterion: cost(Theta) - cost(ThetaNew) < 0.000001

(b)

c.

d.
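The learning rate and stopping criterion above can be sketched as a plain batch gradient-descent loop. This is a minimal illustration, not the assignment's actual code: the toy data, the function name, and the lr=0.1 used in the demo are all assumptions.

```python
import numpy as np

def gradient_descent(X, y, lr=0.001, tol=1e-6):
    """Batch gradient descent for linear least squares.

    Stops when cost(theta) - cost(theta_new) < tol, matching the
    stopping criterion stated above."""
    m, n = X.shape
    theta = np.zeros(n)
    cost = lambda t: np.sum((X @ t - y) ** 2) / (2 * m)
    prev = cost(theta)
    while True:
        theta = theta - lr * (X.T @ (X @ theta - y)) / m  # gradient step
        cur = cost(theta)
        if prev - cur < tol:   # improvement too small: stop
            return theta
        prev = cur

# Toy data: y = 1 + 2x, with an intercept column of ones
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
X = np.column_stack([np.ones_like(x), x])
y = 1.0 + 2.0 * x
theta = gradient_descent(X, y, lr=0.1)   # converges near [1, 2]
```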

Assignment Report
Friday, February 10, 2017 8:06 PM
Anupam Sobti, 2015ANZ8497


e.

Observation: We can observe that as the learning rate increases, the step size increases. After a certain limit (in this case, ), the appropriate minimum is never reached.
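The divergence described above can be seen on a toy objective. The function f(t) = t^2 and the thresholds below are illustrative assumptions, not the assignment's actual cost or learning rates.

```python
def gd_step_growth(lr, steps=50):
    """Run gradient descent on f(t) = t^2 (gradient 2t) from t = 1.

    Each update multiplies t by (1 - 2*lr), so once lr passes the limit
    where |1 - 2*lr| > 1, the iterates (and step sizes) grow every step."""
    t = 1.0
    for _ in range(steps):
        t = t - lr * 2 * t
    return abs(t)

# |1 - 2*lr| < 1 converges; beyond that limit the minimum is never reached
assert gd_step_growth(0.1) < 1e-3   # shrinks towards the minimum at 0
assert gd_step_growth(1.5) > 1e3    # overshoots further on every step
```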

Q2.a.

Plot for linear regression

Implemented locally weighted linear regression for tau = 0.8

c.

The value of tau determines the neighbourhood around a query point that is considered when fitting the local linear model. If tau is too small, the curve starts to overfit; a value too large makes it equivalent to ordinary linear regression. The value tau = 0.3 should work best, since it neither overfits nor underfits.
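A minimal sketch of locally weighted linear regression with Gaussian weights of bandwidth tau; the function name and the toy data are illustrative assumptions, not the assignment's dataset.

```python
import numpy as np

def lwr_predict(x_query, X, y, tau=0.8):
    """Predict at x_query by solving weighted least squares, where each
    training point is weighted by exp(-(x - x_query)^2 / (2 tau^2)).

    Small tau -> very local fits (overfitting); large tau -> weights
    become uniform and the fit approaches plain linear regression."""
    w = np.exp(-((X[:, 1] - x_query) ** 2) / (2 * tau ** 2))
    W = np.diag(w)
    # Weighted normal equations: theta = (X^T W X)^{-1} X^T W y
    theta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return np.array([1.0, x_query]) @ theta

# On exactly linear data every local fit recovers the same line
x = np.linspace(-1, 1, 50)
X = np.column_stack([np.ones_like(x), x])
y = 3.0 * x + 1.0
pred = lwr_predict(0.5, X, y, tau=0.3)   # close to 3*0.5 + 1 = 2.5
```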


Q3. a.

b.

Q4.a.

Q4.b.

Q4.c.


Q4.d.

Q4.e.

f. The linear boundary, lacking per-class covariance information, is similar to the boundary logistic regression would produce: it merely maximizes the distance of both classes from the boundary. The quadratic boundary, however, captures how a change in one feature varies with the other, so it bends towards the Canada class. Intuitively, some Alaska-class points lie on the opposite side of the boundary, so the probability of the Alaska class should be higher there.
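The linear-vs-quadratic distinction can be sketched by fitting class-conditional Gaussians directly. The data and function names below are illustrative assumptions, not the assignment's Alaska/Canada set: with per-class covariances the log-odds keep a quadratic term in x, while forcing a shared covariance cancels it, leaving a linear, logistic-regression-like boundary.

```python
import numpy as np

def gda_fit(X, y):
    """Per-class mean, covariance, and prior for two classes 0/1."""
    out = {}
    for c in (0, 1):
        Xc = X[y == c]
        out[c] = (Xc.mean(axis=0), np.cov(Xc, rowvar=False), len(Xc) / len(X))
    return out

def gda_log_odds(x, params):
    """log p(y=1|x) - log p(y=0|x); quadratic in x unless S0 == S1."""
    def log_gauss(x, mu, S):
        d = x - mu
        return -0.5 * (np.log(np.linalg.det(S)) + d @ np.linalg.solve(S, d))
    (m0, S0, p0), (m1, S1, p1) = params[0], params[1]
    return log_gauss(x, m1, S1) - log_gauss(x, m0, S0) + np.log(p1 / p0)

# Two well-separated 2-D Gaussian clusters, one per class
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2.0, 1.0, (100, 2)), rng.normal(2.0, 1.0, (100, 2))])
y = np.repeat([0, 1], 100)
params = gda_fit(X, y)
```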


Monday, February 20, 2017 7:02 AM


Wednesday, March 1, 2017 9:28 AM


Wednesday, March 1, 2017 10:02 AM


The complete framework breaks down in the presence of an outlier. Therefore, we introduce slack variables to deal with noisy data/outliers.

SVMs with slack
Wednesday, March 1, 2017 2:49 PM

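The slack variables mentioned above can be written down directly as hinge losses. This is a sketch of the standard soft-margin objective, not course code; the numbers below are a made-up illustration.

```python
import numpy as np

def slacks(w, b, X, y):
    """xi_i = max(0, 1 - y_i (w.x_i + b)): zero for points beyond the
    margin, positive for margin violations such as noisy points/outliers."""
    return np.maximum(0.0, 1.0 - y * (X @ w + b))

def soft_margin_objective(w, b, X, y, C=1.0):
    """Soft-margin SVM objective: 0.5 ||w||^2 + C * sum_i xi_i.
    C trades margin width against tolerance of violations."""
    return 0.5 * (w @ w) + C * slacks(w, b, X, y).sum()

# Two clean points plus one outlier on the wrong side of the margin
X = np.array([[2.0], [-2.0], [-1.0]])
y = np.array([1.0, -1.0, 1.0])        # the third point is the outlier
w, b = np.array([1.0]), 0.0
# slacks -> [0, 0, 2]: only the outlier pays a penalty, instead of
# making the hard-margin problem infeasible
```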

Non-linear SVMs
Wednesday, March 1, 2017 5:28 PM


* Exam question

