
Lecture slides on Data Mining: Support Vector Machine - Trịnh Tấn Đạt

Shared by: _ _ | Date: | File type: PDF | Pages: 77


This chapter of the Data Mining lecture series, on support vector machines, covers: a review of linear algebra; classifiers and classifier margin; linear SVMs as an optimization problem; hard and soft margin classification; non-linear SVMs; and more. Readers are invited to consult the lecture content in detail.


Text content: Lecture slides on Data Mining: Support Vector Machine - Trịnh Tấn Đạt

  1. Trịnh Tấn Đạt, Faculty of Information Technology – Saigon University. Email: trinhtandat@sgu.edu.vn. Website: https://sites.google.com/site/ttdat88/
  2. Contents ▪ Introduction ▪ Review of Linear Algebra ▪ Classifiers & Classifier Margin ▪ Linear SVMs: Optimization Problem ▪ Hard vs. Soft Margin Classification ▪ Non-linear SVMs
  3. Introduction ▪ Competitive with other classification methods ▪ Relatively easy to learn ▪ Kernel methods give an opportunity to extend the idea to regression, density estimation, kernel PCA, etc.
  4. Advantages of SVMs - 1 ▪ A principled approach to classification, regression and novelty detection ▪ Good generalization capabilities ▪ The hypothesis has an explicit dependence on the data, via the support vectors, so the model can be readily interpreted
  5. Advantages of SVMs - 2 ▪ Learning involves optimization of a convex function (no local minima, unlike neural nets) ▪ Only a few parameters are required to tune the learning machine (unlike the many weights, learning parameters, hidden layers, hidden units, etc. in neural nets)
  6. Prerequisites ▪ Vectors, matrices, dot products ▪ Equation of a straight line in vector notation ▪ Familiarity with the perceptron is useful; mathematical programming will be useful; vector spaces will be an added benefit ▪ The more comfortable you are with linear algebra, the easier this material will be
  7. What is a Vector? ▪ Think of a vector as a directed line segment in N dimensions (it has a "length" and a "direction") ▪ Basic idea: convert geometry in higher dimensions into algebra ▪ Once you define a "nice" basis along each dimension (x-, y-, z-axis, …), a vector becomes an N x 1 matrix: v = [a b c]T ▪ Geometry starts to become linear algebra on vectors like v [Figure: vector v drawn in the x-y plane]
  8. Vector Addition: A + B ▪ v + w = (x1, x2) + (y1, y2) = (x1 + y1, x2 + y2) ▪ A + B = C (use the head-to-tail method to combine vectors) [Figure: head-to-tail diagram of vectors A and B and their sum C]
  9. Scalar Product: av ▪ av = a(x1, x2) = (ax1, ax2) ▪ Scaling changes only the length, but keeps the direction fixed ▪ Sneak peek: a matrix operation (Av) can change length, direction and also dimensionality [Figure: vector v and its scaled version av]
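As a quick illustration of the vector operations on slides 8 and 9, here is a minimal NumPy sketch (not part of the original slides; the vectors, the scalar and the matrix are invented for the example):

```python
import numpy as np

# Vectors as NumPy arrays (example values chosen arbitrarily)
v = np.array([1.0, 2.0])
w = np.array([3.0, -1.0])

# Vector addition: component-wise, (x1, x2) + (y1, y2) = (x1 + y1, x2 + y2)
print(v + w)        # [4. 1.]

# Scalar product a*v: scales the length, keeps the direction
a = 2.5
print(a * v)        # [2.5 5. ]

# A matrix operation Av can change length, direction and dimensionality
A = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [1.0, 1.0]])   # maps R^2 -> R^3
print(A @ v)        # [1. 4. 3.]
```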
  10. Vectors: Magnitude (Length) and Phase (Direction) ▪ For v = (x1, x2, …, xn)T, the magnitude (or "2-norm") is ||v|| = sqrt(x1^2 + x2^2 + … + xn^2) ▪ If ||v|| = 1, v is a unit vector (a unit vector is pure direction) ▪ Alternate representations: polar coordinates (||v||, θ); complex numbers ||v|| e^(jθ) [Figure: vector in the x-y plane with magnitude ||v|| and phase angle θ]
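A small sketch of the 2-norm and unit-vector ideas on slide 10, again in NumPy with made-up values:

```python
import numpy as np

v = np.array([3.0, 4.0])

# Magnitude ("2-norm"): ||v|| = sqrt(sum of squared components)
norm_v = np.linalg.norm(v)          # 5.0, same as np.sqrt(np.sum(v**2))

# Normalizing gives a unit vector: pure direction, length 1
u = v / norm_v
print(norm_v, np.linalg.norm(u))    # 5.0 1.0
```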
  11. Inner (Dot) Product: v.w or wTv ▪ v.w = (x1, x2).(y1, y2) = x1y1 + x2y2 ▪ The inner product is a SCALAR ▪ v.w = ||v|| ||w|| cos θ ▪ v.w = 0 ⟺ v ⊥ w ▪ If vectors v, w are "columns", then the dot product is wTv
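The sketch below (also not from the slides; the vectors are arbitrary) checks the facts on slide 11: the dot product is a scalar, it equals ||v|| ||w|| cos θ, and it vanishes for perpendicular vectors:

```python
import numpy as np

v = np.array([1.0, 2.0])
w = np.array([3.0, 1.0])

dot = v @ w                               # scalar: 1*3 + 2*1 = 5.0

# Same value via ||v|| ||w|| cos(theta)
cos_theta = dot / (np.linalg.norm(v) * np.linalg.norm(w))
theta = np.arccos(cos_theta)
print(np.isclose(dot, np.linalg.norm(v) * np.linalg.norm(w) * np.cos(theta)))  # True

# Perpendicular vectors have zero dot product
print(np.array([1.0, 0.0]) @ np.array([0.0, 5.0]))   # 0.0
```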
  12. Projections with an Orthogonal Basis ▪ Get the component of the vector on each axis: take the dot product with the unit vector along each axis ▪ Aside: this is what the Fourier transform does: it projects a function onto an infinite number of orthonormal basis functions (complex exponentials) and adds the results up, to get an equivalent "representation" in the "frequency" domain
  13. Projection: Using Inner Products - 1 ▪ p = a (aTx), where a is a unit vector: aTa = ||a||^2 = 1 [Figure: projection p of x onto the unit vector a]
  14. Projection: Using Inner Products - 2 ▪ For a general (not necessarily unit) vector a: p = a (aTb) / (aTa) ▪ Note: the "error vector" e = b - p is orthogonal (perpendicular) to p, i.e. the inner product (b - p)Tp = 0 [Figure: projection p of b onto a, with error vector e = b - p]
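A minimal NumPy sketch of the projection formula on slides 13-14, which also verifies that the error vector is orthogonal to the projection (the vectors a and b are invented):

```python
import numpy as np

a = np.array([2.0, 1.0])
b = np.array([1.0, 3.0])

# Projection of b onto a: p = a (a.b) / (a.a)
p = a * (a @ b) / (a @ a)

# Error vector e = b - p is orthogonal to p (and to a)
e = b - p
print(p)                       # [2. 1.]
print(np.isclose(e @ p, 0.0))  # True
```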
  15. Review of Linear Algebra - 1 ▪ Consider w1x1 + w2x2 + b = 0, i.e. wTx + b = w.x + b = 0 ▪ In the x1x2-coordinate system, this is the equation of a straight line ▪ Proof: rewrite it as x2 = -(w1/w2) x1 - (b/w2) and compare with y = mx + c; this is the equation of a straight line with slope m = -(w1/w2) and intercept c = -(b/w2)
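To make slide 15 concrete, this small check (the values of w and b are made up) confirms that points taken from the slope-intercept form x2 = -(w1/w2) x1 - (b/w2) satisfy w.x + b = 0:

```python
import numpy as np

w = np.array([2.0, 3.0])   # (w1, w2), example values
b = -6.0

# Slope-intercept form of the line w.x + b = 0
m = -w[0] / w[1]           # slope     = -(w1/w2)
c = -b / w[1]              # intercept = -(b/w2)

# Pick some x1 values, compute x2 from the line, and verify w.x + b = 0
for x1 in [-1.0, 0.0, 2.5]:
    x = np.array([x1, m * x1 + c])
    print(np.isclose(w @ x + b, 0.0))   # True for every point on the line
```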
  16. Review of Linear Algebra - 2 1. w.x = 0 is the equation of a straight line through the origin 2. w.x + b = 0 is the equation of any straight line 3. w.x + b = +1 is the equation of a straight line parallel to (2), on its positive side, at a distance 1/||w|| 4. w.x + b = -1 is the equation of a straight line parallel to (2), on its negative side, at a distance 1/||w||
  17. Define a Binary Classifier ▪ Define f as a classifier: f = f(w, x, b) = sign(w.x + b) ▪ If f = +1, x belongs to Class 1 ▪ If f = -1, x belongs to Class 2 ▪ We call f a linear classifier because w.x + b = 0 is a straight line; this line is called the class boundary
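A direct translation of the classifier on slide 17 into Python; the weights, bias and sample points below are illustrative rather than taken from the slides:

```python
import numpy as np

def linear_classifier(w, x, b):
    """f(w, x, b) = sign(w.x + b): +1 -> Class 1, -1 -> Class 2."""
    return np.sign(w @ x + b)

w = np.array([2.0, -1.0])   # example weight vector
b = 0.5                     # example bias

print(linear_classifier(w, np.array([1.0, 0.0])))   #  1.0 -> Class 1
print(linear_classifier(w, np.array([0.0, 3.0])))   # -1.0 -> Class 2
```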
  18. Linear Classifiers ▪ f(x, w, b) = sign(w.x + b) ▪ How would you classify this data? [Figure: 2-D training points, with markers denoting the +1 and -1 classes; the regions w.x + b > 0 and w.x + b < 0 lie on either side of a candidate separating line]
  19. Linear Classifiers ▪ f(x, w, b) = sign(w.x + b) ▪ How would you classify this data? [Figure: the same +1 / -1 training points]
  20. Linear Classifiers ▪ f(x, w, b) = sign(w.x + b) ▪ How would you classify this data? [Figure: the same +1 / -1 training points]
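As one possible answer to the question posed on slides 18-20, the sketch below fits a linear classifier of the form f(x) = sign(w.x + b) to a tiny invented 2-D dataset using scikit-learn's LinearSVC (assuming scikit-learn is installed; the data and the choice of C are arbitrary). The later parts of the lecture, on classifier margin, discuss how SVMs choose among the many candidate separating lines.

```python
import numpy as np
from sklearn.svm import LinearSVC

# Tiny made-up 2-D dataset with labels +1 and -1
X = np.array([[2.0, 2.0], [3.0, 3.0], [2.5, 3.5],    # class +1
              [0.0, 0.5], [1.0, 0.0], [0.5, 1.0]])   # class -1
y = np.array([1, 1, 1, -1, -1, -1])

# Fit a linear classifier f(x) = sign(w.x + b)
clf = LinearSVC(C=1.0).fit(X, y)
w, b = clf.coef_[0], clf.intercept_[0]

print(w, b)                                    # learned weight vector and bias
print(clf.predict([[2.0, 3.0], [0.5, 0.5]]))   # likely [ 1 -1] for data this well separated
```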