---
marp: true
paginate: true
---

AI: From the pages of science, to the future of our lives.

by Ahmadreza Anaami


Goals

In this lab you will:

  • get a brief introduction to AI and ML
    • basic concepts of machine learning
  • learn the main families of algorithms
    • supervised
      • regression
      • classification
    • unsupervised
  • learn to implement linear regression



Machine learning

A subfield of AI that aims to build intelligent machines.

"Field of study that gives computers the ability to learn without being explicitly programmed."

  • Arthur Samuel, 1959

Supervised learning

  • learns from being given "right answers"
    • correct pairs of input (x) and output (y)



| Input          | Output                 | Application         |
| -------------- | ---------------------- | ------------------- |
| email          | spam? (0/1)            | spam filtering      |
| audio          | text transcript        | speech recognition  |
| English        | Spanish                | machine translation |
| ad, user info  | click? (0/1)           | online advertising  |
| image, radar   | position of other cars | self-driving car    |
| image of phone | defect? (0/1)          | visual inspection   |



Regression

Predicts a number from infinitely many possible outputs.



Classification

Predicts a category: the output can only be one of a small, finite set of values, called categories (or classes). A toy sketch follows the table below.

| Size | Diagnosis |
| ---- | --------- |
| 6    | 1         |
| 8    | 1         |
| 2    | 0         |
| 5    | 0         |
| 1    | 0         |
| 7    | 1         |
| 5.6  | 1         |
| 12   | 1         |
| 3.5  | 0         |
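
As a deliberately simple illustration (not an algorithm from the lab), a hand-picked threshold on the single feature size already separates the two categories in this toy table:

```python
import numpy as np

# Toy data from the table above: size -> diagnosis (1 or 0)
size = np.array([6, 8, 2, 5, 1, 7, 5.6, 12, 3.5])
diagnosis = np.array([1, 1, 0, 0, 0, 1, 1, 1, 0])

# A hand-picked decision threshold (an assumption for illustration only)
threshold = 5.5
pred = (size > threshold).astype(int)

print("predictions:", pred)
print("accuracy:", (pred == diagnosis).mean())
```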





Recap


In every living man a child is hidden that wants to play




Unsupervised learning

Find something interesting in unlabeled data.

  • Clustering (sketched below)
  • Anomaly detection
  • Dimensionality reduction
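
As a quick, hedged sketch of the first idea, clustering, here is a minimal k-means example with scikit-learn; the made-up 2-D points and the cluster count are assumptions for illustration only:

```python
import numpy as np
from sklearn.cluster import KMeans

# Made-up, unlabeled 2-D points forming two loose groups
X = np.array([[1.0, 1.1], [1.2, 0.9], [0.8, 1.0],
              [5.0, 5.2], [5.1, 4.8], [4.9, 5.0]])

# Ask k-means to find 2 clusters; no labels are given
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

print("cluster assignments:", kmeans.labels_)
print("cluster centers:\n", kmeans.cluster_centers_)
```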







Linear regression


$$
f_{w,b}(x) = wx + b
$$

w, b → parameters (weight and bias); x → a single input feature
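
A minimal sketch of this model in NumPy (the function name and toy numbers are mine, not from the lab):

```python
import numpy as np

def predict(x, w, b):
    """Linear model f_wb(x) = w*x + b for a single feature x (scalar or array)."""
    return w * x + b

# Toy check with assumed parameters w = 2, b = 1
x = np.array([1.0, 2.0, 3.0])
print(predict(x, w=2.0, b=1.0))   # -> [3. 5. 7.]
```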



Cost function

$$J(w,b) = \frac{1}{2m} \sum\limits_{i = 0}^{m-1} (f_{w,b}(x^{(i)}) - y^{(i)})^2$$

where

$$yHat = f_{w,b}(x^{(i)}) = wx^{(i)} + b $$

m = number of training examples
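
Translated directly into code, the cost might look like the sketch below (the helper name compute_cost and the toy data are assumptions of mine):

```python
import numpy as np

def compute_cost(x, y, w, b):
    """Mean squared error cost J(w,b) = 1/(2m) * sum((w*x + b - y)^2)."""
    m = x.shape[0]
    y_hat = w * x + b
    return np.sum((y_hat - y) ** 2) / (2 * m)

# Toy example: y = 2x exactly, so the cost at w=2, b=0 is zero
x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])
print(compute_cost(x, y, w=2.0, b=0.0))   # -> 0.0
```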



Example

Choose w to minimize J(w) (b is dropped here, so the model has a single parameter):

$$yHat = f_{w}(x^{(i)}) = wx^{(i)} $$

$$J(w) = \frac{1}{2m} \sum\limits_{i = 0}^{m-1} (f_{w}(x^{(i)}) - y^{(i)})^2$$
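
A toy illustration (my own numbers): evaluate J(w) for a few values of w and see that it is smallest where the line fits the data:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([1.0, 2.0, 3.0])   # data that lies exactly on y = 1*x

def cost_w(w):
    m = x.shape[0]
    return np.sum((w * x - y) ** 2) / (2 * m)

for w in [0.0, 0.5, 1.0, 1.5]:
    print(f"w = {w}: J(w) = {cost_w(w):.3f}")
# J(w) is 0 at w = 1 and grows as w moves away from it
```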




Gradient descent

A more systematic way to minimize J(w,b):

$$
\begin{align*}
&\text{repeat until convergence:} \; \lbrace \\
&\quad w = w - \alpha \frac{\partial}{\partial w} J(w,b) \\
&\quad b = b - \alpha \frac{\partial}{\partial b} J(w,b) \\
&\rbrace
\end{align*}
$$
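
A self-contained sketch of that loop in NumPy (toy data, α, and the iteration count are my own choices for illustration, not from the lab):

```python
import numpy as np

# Toy data lying on y = 2x + 1
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([3.0, 5.0, 7.0, 9.0])

def gradient_descent(x, y, w, b, alpha, num_iters):
    """Repeat the simultaneous update of w and b shown above."""
    m = x.shape[0]
    for _ in range(num_iters):
        err = (w * x + b) - y           # f_wb(x^(i)) - y^(i)
        dj_dw = np.sum(err * x) / m     # dJ/dw
        dj_db = np.sum(err) / m         # dJ/db
        w = w - alpha * dj_dw           # both parameters are updated
        b = b - alpha * dj_db           # using the gradients at the old (w, b)
    return w, b

w, b = gradient_descent(x, y, w=0.0, b=0.0, alpha=0.05, num_iters=5000)
print(w, b)   # approaches w ≈ 2, b ≈ 1
```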



Derivatives of the cost function

$$J(w,b) = \frac{1}{2m} \sum\limits_{i = 0}^{m-1} (f_{w,b}(x^{(i)}) - y^{(i)})^2$$

$$ \begin{align} \frac{\partial }{\partial w}J(w,b) &= \frac{1}{m} \sum\limits_{i = 0}^{m-1} (f_{w,b}(x^{(i)}) - y^{(i)})x^{(i)} \\ \frac{\partial }{\partial b}J(w,b) &= \frac{1}{m} \sum\limits_{i = 0}^{m-1} (f_{w,b}(x^{(i)}) - y^{(i)}) \\ \end{align} $$
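
These formulas translate directly into code; the sketch below (helper names and toy numbers are mine) also checks the analytic gradient against a finite-difference estimate:

```python
import numpy as np

def compute_gradient(x, y, w, b):
    """Analytic partial derivatives of J(w,b), exactly as in the formulas above."""
    m = x.shape[0]
    err = (w * x + b) - y
    return np.sum(err * x) / m, np.sum(err) / m

def cost(x, y, w, b):
    return np.sum(((w * x + b) - y) ** 2) / (2 * x.shape[0])

# Sanity check: the analytic gradient should match a finite-difference estimate
x = np.array([1.0, 2.0, 3.0])
y = np.array([3.0, 5.0, 7.0])
w, b, eps = 0.5, 0.1, 1e-6

dj_dw, dj_db = compute_gradient(x, y, w, b)
fd_dw = (cost(x, y, w + eps, b) - cost(x, y, w - eps, b)) / (2 * eps)
fd_db = (cost(x, y, w, b + eps) - cost(x, y, w, b - eps)) / (2 * eps)
print(dj_dw, fd_dw)   # should be (almost) equal
print(dj_db, fd_db)
```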


optional


If α is too small: gradient descent still converges, but very slowly.
If α is too big: gradient descent can overshoot the minimum and fail to converge (it may even diverge).

$$\alpha \frac{\partial }{\partial w} J(w,b)$$
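
A quick toy experiment of my own (not part of the lab) that shows both behaviours by taking a few steps with a small and a large α:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 4.0, 6.0, 8.0])      # toy data: y = 2x
m = x.shape[0]

def cost(w, b):
    return np.sum((w * x + b - y) ** 2) / (2 * m)

def step(w, b, alpha):
    err = (w * x + b) - y
    return w - alpha * np.sum(err * x) / m, b - alpha * np.sum(err) / m

for alpha in (0.01, 0.3):               # one "too small", one "too big" for this data
    w, b = 0.0, 0.0
    for _ in range(5):
        w, b = step(w, b, alpha)
    print(f"alpha={alpha}: cost after 5 steps = {cost(w, b):.2f}")
# The small alpha makes only slow progress; the large one makes the cost grow instead of shrink.
```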

calculate Derivative : ☻

Let's get our hands dirty (a minimal end-to-end sketch follows the step list below):

  • step 1
    • read our data
  • step 2
    • write our functions
      • calculate yHat
      • calculate cost
      • calculate gradient
      • implement gradient descent
  • step 3
    • congratulations
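
A minimal end-to-end sketch of the three steps (the file name data.csv, its two-column layout, and the hyperparameters are assumptions of mine, not given by the lab):

```python
import numpy as np

# Step 1: read our data (assumed: a CSV with two columns, feature x and target y)
data = np.loadtxt("data.csv", delimiter=",")
x, y = data[:, 0], data[:, 1]

# Step 2: write our functions
def compute_yhat(x, w, b):
    return w * x + b

def compute_cost(x, y, w, b):
    m = x.shape[0]
    return np.sum((compute_yhat(x, w, b) - y) ** 2) / (2 * m)

def compute_gradient(x, y, w, b):
    m = x.shape[0]
    err = compute_yhat(x, w, b) - y
    return np.sum(err * x) / m, np.sum(err) / m

def gradient_descent(x, y, w, b, alpha, num_iters):
    for i in range(num_iters):
        dj_dw, dj_db = compute_gradient(x, y, w, b)
        w -= alpha * dj_dw
        b -= alpha * dj_db
        if i % 100 == 0:
            print(f"iter {i}: cost = {compute_cost(x, y, w, b):.4f}")
    return w, b

# Step 3: run it (alpha and num_iters are arbitrary starting points)
w, b = gradient_descent(x, y, w=0.0, b=0.0, alpha=0.01, num_iters=1000)
print(f"learned model: f(x) = {w:.3f} * x + {b:.3f}")
```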
