Random Variables/Vectors
               Tomoki Tsuchida
   Computational & Cognitive Neuroscience Lab
       Department of Cognitive Science
       University of California, San Diego
Talk Outline
•       Random Variables Defined

•       Types of Random Variables
    ‣    Discrete
    ‣    Continuous

•       Characterizing Random Variables
    ‣    Expected Value
    ‣    Variance/Standard Deviation; Entropy
    ‣    Linear Combinations of Random Variables

•       Random Vectors Defined

•       Characterizing Random Vectors
    ‣    Expected Value
    ‣    Covariance
Random Variable

Elementary outcomes of a random experiment are mapped to the real line.

   Flipping a coin once: heads is assigned $1, tails $0.

• A random variable is a function of the outcome.
• The probability of the r.v. taking a particular value is
determined by the probability of the underlying outcomes.
Example
Let X be the sum of the payoffs from two coin flips.


         P(X = 0) = P({TT}) = 1/4
     P(X = 1) = P({TH, HT}) = 1/2
         P(X = 2) = P({HH}) = 1/4

         The random variable X takes values {0, 1, 2},
               with probabilities {1/4, 1/2, 1/4}.
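The pmf above can be checked by enumerating the four equally likely outcomes. A minimal sketch in Python, assuming (as in the slides) that each heads pays $1 and each tails $0:

```python
from itertools import product
from collections import Counter
from fractions import Fraction

# Payoff per flip: heads pays $1, tails pays $0.
payoff = {"H": 1, "T": 0}

# Enumerate the four equally likely outcomes of two flips.
pmf = Counter()
for flips in product("HT", repeat=2):
    x = sum(payoff[f] for f in flips)   # X = sum of the payoffs
    pmf[x] += Fraction(1, 4)            # each outcome has probability 1/4

print(dict(pmf))   # X takes 0, 1, 2 with probabilities 1/4, 1/2, 1/4
```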
Talk Outline (recap): next, Types of Random Variables.
Discrete Random Variables:
         Variables whose outcomes
           are separated by gaps




 Rolling a six-sided die once (and getting paid the number on the face): {1, 2, 3, 4, 5, 6}

 Flipping a coin once (and getting paid for H): {0, 1}
Discrete Random Variables:
 Defined by a probability mass function, P
     • P(X = a) = P(a)
     • 0 ≤ P(a) ≤ 1
     • The probabilities of all outcomes sum to one (from the axiom!)

[Bar chart: pmf of rolling a fair six-sided die, each outcome 1 through 6 with probability 1/6]
Types of Probability Mass Functions:
     Discrete Uniform Distribution

                       P(X = a) = 1 / N
   (where N is the total number of distinct outcomes)


[Bar chart: pmf of rolling a fair six-sided die, uniform at 1/6 for each outcome]
Types of Probability Mass Functions:
         Binomial Distribution



[Bar chart: pmf of the number of heads in two flips of a fair coin: 0.25, 0.50, 0.25]
Types of Probability Mass Functions:
          Binomial Distribution
           pmf: P(X = k) = (n choose k) · p^k · (1 − p)^(n−k)

    k: the number of “successes” (in our case the
       outcome of heads is defined as success)
    p: probability of success in a single observation (in
       our case .5)
    n: the number of observations (in our case two)

    (n choose k) = n! / (k! (n − k)!) : the number of different ways
       you could get k successes out of n observations
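The pmf above can be evaluated directly. A short sketch (the function name binomial_pmf is mine, not from the slides):

```python
from math import comb

def binomial_pmf(k, n, p):
    """P(X = k): k successes in n observations, success probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Flipping a fair coin twice (n = 2, p = 0.5), as on the slide:
probs = [binomial_pmf(k, 2, 0.5) for k in range(3)]
print(probs)   # [0.25, 0.5, 0.25]
```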
Talk Outline (recap): next, Continuous Random Variables.
Continuous Random Variables:
Variables for which an outcome always
  lies between two other outcomes




          A person’s height: any real value a > 0 within some range
Continuous Random Variables:
              Defined by probability density function

[Side-by-side plots: a discrete pmf (bars) vs. a continuous pdf (smooth curve)]
Continuous Random Variables:
 Probability of a range of outcomes
[Shaded areas under a height pdf: p(179 ≤ x ≤ 181) and p(x ≤ 178)]

              p(a ≤ x ≤ b) = ∫ₐᵇ f(x) dx

 p(X = x) = 0 (no single outcome has positive probability!)
Continuous Random Variables:
  Defined by probability density function, f

[Figure: a continuous pdf, f]

• f(a) ≥ 0
• The area under the pdf must equal 1
Types of Probability Density
                     Functions:

                     Continuous Uniform Distribution
if a ≤ x ≤ b:
    f(x) = 1 / (b − a)
else:
    f(x) = 0

a = lower bound
b = upper bound
Types of Probability Density
                 Functions:

                           Normal (Gaussian) Distribution


f(x) = (1 / (σ√(2π))) · e^(−(x−µ)² / (2σ²))
σ = standard deviation
µ = mean
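The density formula above can be evaluated at any point. A small sketch (normal_pdf is my name for it):

```python
from math import exp, pi, sqrt

def normal_pdf(x, mu, sigma):
    """Density of a normal distribution with mean mu and std sigma."""
    return exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * sqrt(2 * pi))

# The density peaks at the mean; for a standard normal the peak
# height is 1 / sqrt(2*pi), about 0.3989.
print(normal_pdf(0.0, 0.0, 1.0))
```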
Cumulative distribution function

What if we want to know P(X ≤ x)? That is the cumulative
distribution function: F(x) = P(X ≤ x) = ∫ f(t) dt over (−∞, x].

[Figure: density function vs. distribution function]
Types of probability distributions

There are lots and lots of distributions!




   (But we can always look them up on Wikipedia!)
Talk Outline
•       Random Variables Defined

•       Types of Random Variables
    ‣    Discrete
    ‣    Continuous

•       Characterizing Random Variables
    ‣    Expected Value
    ‣    Variance/Standard Deviation; Entropy
    ‣    Linear Combinations of Random Variables

•       Random Vectors Defined

•       Characterizing Random Vectors
    ‣    Expected Value
    ‣    Covariance
Characterizing the distribution of a
          random variable

If we know the distribution of a
random variable, we pretty much
know all there is to know about
the random variable.

But with real data, we don’t know
the full distribution.


So we want to characterize distributions by a
couple of numbers (“statistics”).
Characterizing the Central Tendency
        of a Random Variable


           Normal (Gaussian) Distribution
p(x) = (1 / (σ√(2π))) · e^(−(x−µ)² / (2σ²))

σ = standard deviation
µ = mean

For a Gaussian, the mean and
standard deviation determine the
entire distribution.
A Simple Gambling Game

1. Flip a fair coin
2. Possible outcomes:


               Heads: I give you $2         P(X = 2) = 1/2
               Tails: you give me $1        P(X = −1) = 1/2
A Simple Gambling Game

[Bar chart: pmf over win/loss (in $); P(X = −1) = P(X = 2) = 0.5]

If we played this game an infinite # of times, what would
the average outcome be?

              µ = E(X) = Σᵢ P(X = xᵢ) xᵢ

Expected value, the “mean”: E(X) = $0.5
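The expected-value sum above works out like this for the coin game (a sketch using exact fractions):

```python
from fractions import Fraction

# Coin game from the slides: win $2 or lose $1, each with probability 1/2.
pmf = {2: Fraction(1, 2), -1: Fraction(1, 2)}

# mu = E(X) = sum over i of P(X = x_i) * x_i
mean = sum(p * x for x, p in pmf.items())
print(mean)   # 1/2, i.e. E(X) = $0.5
```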
Another Gambling Game
1. Roll a fair six-sided die
2. Possible outcomes:
          Die           Payoff
            1             $8
            2             -$1
            3             -$1
            4             -$1
            5             -$1
            6             -$1
Another Gambling Game


[Bar chart: pmf over win/loss (in $); P(X = −1) = 5/6, P(X = 8) = 1/6]

What’s the mean outcome of this game?

              µ = E(X) = Σᵢ P(X = xᵢ) xᵢ = (5/6)(−$1) + (1/6)($8) = $0.5

              E(X) = $0.5
Why should you prefer the coin
                            game?
[Side-by-side pmfs, both with mean $0.5: the coin game is tightly
concentrated around its mean; the die game has a rare large payoff
and is far more spread out]
Talk Outline (recap): next, Variance/Standard Deviation and Moments.
Characterizing the Variability of a
                      Random Variable
[Side-by-side pmfs of the coin game and the die game: same mean,
different variability]
Variance: The expected value of the squared
            deviation from the mean
Variance shows the “spread” of the distribution.

              σ² = Var(X) = Σᵢ P(X = xᵢ)(xᵢ − µ)²

For the coin game: Var(X) = (1/2)(2 − 0.5)² + (1/2)(−1 − 0.5)²
                          = 2.25 = 9/4 dollars squared

[Bar chart: coin-game pmf over win/loss (in $)]
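The variance sum for the coin game can be checked the same way (a sketch with exact fractions):

```python
from fractions import Fraction

# Coin game: win $2 or lose $1, each with probability 1/2.
pmf = {2: Fraction(1, 2), -1: Fraction(1, 2)}
mean = sum(p * x for x, p in pmf.items())   # E(X) = 1/2

# sigma^2 = Var(X) = sum over i of P(X = x_i) * (x_i - mu)^2
variance = sum(p * (x - mean) ** 2 for x, p in pmf.items())
print(variance)   # 9/4 dollars squared
```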
Standard Deviation: The square root of the
               variance
              σ² = Var(X) = Σᵢ P(X = xᵢ)(xᵢ − µ)²

              σ = √Var(X)

(Why? Because variance is in the units of X²; the standard
deviation is in the same units as X.)

For the coin game: σ = √2.25 = $1.5
Coin game: µ = $0.5, σ = $1.5          Die game: µ = $0.5, σ ≈ $3.35

[Side-by-side pmfs: same mean, very different spread]
Summary: Mean & Variance

                Definition       Discrete R.V.s           Continuous R.V.s

Mean: µ         E(X)             Σᵢ p(xᵢ) xᵢ              ∫ p(x) x dx

Variance: σ²    E((X − µ)²)      Σᵢ p(xᵢ)(xᵢ − µ)²        ∫ p(x)(x − µ)² dx

(integrals taken over (−∞, ∞))
Moments
But why stop at the variance (~ 2nd moment?)

    3rd moment: E(X³), related to skewness
    4th moment: E(X⁴), related to kurtosis
Talk Outline (recap): next, Linear Combinations of Random Variables.
What happens if I scale a R.V.?


Original Coin Game: X          vs.          Y = 2X

[Bar charts: the pmf of Y is that of X with outcomes stretched
from {−1, 2} to {−2, 4}]
What happens if I scale a R.V.?

                                                                         Y=2X
          The New Mean:

µ_Y = Σᵢ p_Y(2xᵢ) · 2xᵢ = 2 Σᵢ p_X(xᵢ) xᵢ = 2µ_X

µ_X = 0.5, so µ_Y = 1
What happens if I scale a R.V.?

                                                                  Y=2X
         The New Variance:

σ_Y² = Σᵢ p_Y(2xᵢ)(2xᵢ − µ_Y)²
     = Σᵢ p_Y(2xᵢ)(2xᵢ − 2µ_X)²
     = 4 Σᵢ p_X(xᵢ)(xᵢ − µ_X)² = 4σ_X² = 9
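The doubling of the mean and quadrupling of the variance under Y = 2X can be verified numerically (a sketch; mean_var is my helper, not from the slides):

```python
from fractions import Fraction

def mean_var(pmf):
    """Return (mean, variance) of a finite pmf given as {value: prob}."""
    mu = sum(p * x for x, p in pmf.items())
    var = sum(p * (x - mu) ** 2 for x, p in pmf.items())
    return mu, var

# Original coin game X, and the scaled game Y = 2X.
px = {2: Fraction(1, 2), -1: Fraction(1, 2)}
py = {2 * x: p for x, p in px.items()}

mu_x, var_x = mean_var(px)
mu_y, var_y = mean_var(py)
print(mu_y, var_y)   # mean doubles to 1, variance quadruples to 9
```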
What happens if I sum two
                        independent R.V.s?
One Round: X          vs.          Y = X₁ + X₂ (two independent rounds)

[Bar charts: the pmf of the sum spreads over −2 to 4]
What happens if I sum two
        independent R.V.s?
                                             Y = X₁ + X₂

     The New Mean:       µ_Y = µ_X + µ_X = 1

 The New Variance:   σ_Y² = σ_X² + σ_X² = 4.5
What happens if I sum two independent
               identically distributed R.V.s?

[Same pmfs as above: one round X vs. the sum Y = X₁ + X₂]
Expectation is linear

          E(aX) = aE(X)
      E(X + Y ) = E(X) + E(Y )
       E(X + c) = E(X) + c

We could’ve calculated the previous results using
               these properties!


     Exercise: what happens to Var(aX)
               and Var(X+Y) ?
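The exercise can be checked numerically for the coin game. A sketch, using a = 3 as an arbitrary scale factor and assuming X and Y are independent copies of the game:

```python
from fractions import Fraction
from itertools import product

def mean_var(pmf):
    """Return (mean, variance) of a finite pmf given as {value: prob}."""
    mu = sum(p * x for x, p in pmf.items())
    return mu, sum(p * (x - mu) ** 2 for x, p in pmf.items())

px = {2: Fraction(1, 2), -1: Fraction(1, 2)}   # coin game

# Var(aX): scale every outcome by a = 3.
a = 3
p_ax = {a * x: p for x, p in px.items()}

# Var(X + Y) for independent X, Y: joint pmf is the product of marginals.
p_sum = {}
for (x, p1), (y, p2) in product(px.items(), px.items()):
    p_sum[x + y] = p_sum.get(x + y, 0) + p1 * p2

mu, var = mean_var(px)
print(mean_var(p_ax)[1], mean_var(p_sum)[1])   # a^2 * Var(X) and 2 * Var(X)
```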
What happens if I sum independent
identically distributed (i.i.d.) R.V.s?

[Bar charts: pmf of the number of heads in 1, 2, 3, and 4 coin
flips; as flips are added, the binomial pmf fills in and starts
to look bell-shaped]
What happens if I sum independent
         identically distributed (i.i.d.) R.V.s?

What’s happening to the pmf?   [Figure: mean of 75 flips]

Ans: it’s looking more and more Gaussian.

[Figure: mean of 150 flips, with the normal pdf overlaid]

Central Limit Theorem:
   The sum of i.i.d. random variables is approximately
   normally distributed when the number of random
   variables is large.
   (from: Oxford Dictionary of Statistics)

This is one reason why Gaussian variables are popularly
assumed when doing statistical analysis or modeling.
Another reason is that it’s mathematically simpler: the sum
of two or more r.v.’s with normal distributions is also
normally distributed.

The number of random variables necessary to make the sum
approximately Gaussian depends on the type of population
distribution.
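The flip experiment above is easy to reproduce. A sketch simulating the mean of 150 fair flips many times; by the CLT the resulting sampling distribution is approximately Gaussian around 0.5:

```python
import random
import statistics

random.seed(0)

def sample_means(n_flips, n_trials=10_000):
    """Mean of n_flips fair coin flips (H=1, T=0), repeated n_trials times."""
    return [sum(random.randint(0, 1) for _ in range(n_flips)) / n_flips
            for _ in range(n_trials)]

means_150 = sample_means(150)
# The means cluster near 0.5 with std close to sqrt(0.25 / 150) ~ 0.041.
print(statistics.mean(means_150), statistics.stdev(means_150))
```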
Continuous Uniform Distribution

[Figures: 1 observation vs. the mean of 20 observations; the
sampling distribution of the mean is already close to normal]

A second, less well-behaved distribution:

[Figures: 1 observation, mean of 25 samples, mean of 50 samples]

Wilcox says you need 100 samples from this distribution to get a
decent approximation.

From: R. R. Wilcox (2003) Applying Contemporary Statistical Techniques
Entropy: Another measure of variability


              H = −Σᵢ p(xᵢ) log₂(p(xᵢ))

Any base is OK, but when base 2 is used entropy is said to be in
units of “bits”.

[Bar chart: pmf of party registration among UCSD voters,
Democrat vs. Republican]
Entropy: Another measure of variability

              H = −Σᵢ p(xᵢ) log₂(p(xᵢ))

1. Entropy is minimal (H = 0) when one outcome is certain.
2. Entropy is maximal when each of the k outcomes is equally
   likely:
               H_max = −log₂(1/k) = log₂ k

3. Entropy is a measure of information capacity.
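Both extremes above can be checked directly (a sketch; by convention terms with p = 0 contribute nothing):

```python
from math import log2

def entropy(probs):
    """H = -sum p_i log2(p_i), in bits; p = 0 terms contribute 0."""
    return -sum(p * log2(p) for p in probs if p > 0)

print(entropy([1.0]))         # one certain outcome: H = 0
print(entropy([0.5, 0.5]))    # fair coin: 1 bit, the maximum log2(2)
print(entropy([0.25] * 4))    # four equally likely outcomes: log2(4) = 2 bits
```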
Talk Outline (recap): next, Random Vectors Defined.
(Do simple RT experiment)
What about more than one
       random variable?

                             256 EEG sensors
120 million photoreceptors
Random Vectors
    •   An n dimensional random vector consists of n random
        variables all associated with the same probability space
        (i.e., each outcome dictates the value of every random
        variable)

    •   Example 2-D Random Vector:
              v = [X, Y]ᵀ       X = Reaction Time
                                Y = Arm Length

    •   Sample m times from v:
            v₁, v₂, v₃, …, v_m

          ⎡x₁  x₂  x₃ … x_m⎤
          ⎣y₁  y₂  y₃ … y_m⎦
Probability Distribution of a
             Random Vector:


“Joint distribution” of constituent r.v.s:

   p(v) = p(X, Y)

[Surface plot: joint density of two normal r.v.s over the (X, Y) plane]
Probability Distribution of a
          Random Vector:

Scatterplot of 5000 observations       Example: two normal r.v.s

[Figure: scatterplot of samples drawn from the joint density]
What will the scatterplot of
        our data look like?
[Four candidate scatterplots, A through D]
Talk Outline (recap): next, Characterizing Random Vectors.
Expected Value of a Random Vector
    •    The expected value of a random vector, v, is simply the
         expected value of its constituent random variables.

    •    Example 2-D Random Vector:

             v = [X, Y]ᵀ

             E(v) = [E(X), E(Y)]ᵀ = µ_v = [µ_X, µ_Y]ᵀ

[Figure: scatterplot with E(v) marked at the point (E(X), E(Y))]
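Componentwise expectation can be estimated from samples. A sketch with a hypothetical 2-D vector (the means 3 and −1 are arbitrary choices, not from the slides):

```python
import random
import statistics

random.seed(1)

# Hypothetical 2-D random vector v = [X, Y]: the sample mean of each
# component estimates the corresponding entry of E(v).
m = 5000
xs = [random.gauss(3.0, 1.0) for _ in range(m)]    # X with E(X) = 3
ys = [random.gauss(-1.0, 2.0) for _ in range(m)]   # Y with E(Y) = -1

e_v = (statistics.mean(xs), statistics.mean(ys))
print(e_v)   # approximately (3, -1)
```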
Variance of a Random Vector?
•   Is the variance of a random vector, v, simply the
    variance of its constituent random variables?

•   Example 2-D Random Vector:

        v = [X, Y]ᵀ        σ_v² = [σ_X², σ_Y²]ᵀ ?
X & Y all have Variance of 2

[Three scatterplots, A through C: same marginal variances,
visibly different joint shapes]
Covariance Matrix of a Random Vector
 •    Diagonal entries are the variance of that dimension

 •    Off-diagonal entries are the covariance between the
      column and row dimensions
     ‣ Covariance between two random variables:
                    Cov(X, Y) = E((X − µ_X)(Y − µ_Y))

         Note: Cov(X, Y) = Cov(Y, X)
               Cov(X, Y) = 0 if X and Y are independent
               Cov(X, Y) ∝ Corr(X, Y)

 •    Our 2-D example:
           v = [X, Y]ᵀ        C = ⎡ Var(X)     Cov(Y, X) ⎤
                                  ⎣ Cov(X, Y)  Var(Y)    ⎦
Which Data=which Covariance Matrix?

A, B, C: [three scatterplots]

        Q = ⎡ 2    1.5 ⎤      R = ⎡  2    −1.5 ⎤      S = ⎡ 2  0 ⎤
            ⎣ 1.5  2   ⎦          ⎣ −1.5   2   ⎦          ⎣ 0  2 ⎦
Covariance of 0 does NOT entail
                independence!!

    •Recall:   Cov(X, Y) ∝ Corr(X, Y)

               Corr(X, Y) = Cov(X, Y) / (σ_X σ_Y)

    •PMF of two dependent variables with a covariance of 0:

          p(X = 1, Y = 0) = .25      p(X = 0, Y = 1) = .25
          p(X = −1, Y = 0) = .25     p(X = 0, Y = −1) = .25

    •Special case: if two jointly normally distributed random
    variables have a covariance of 0, they ARE independent.
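The four-point pmf above really does have zero covariance while remaining dependent, which can be checked exactly:

```python
from fractions import Fraction

# The four-point pmf from the slide: outcomes form a "diamond".
pmf = {(1, 0): Fraction(1, 4), (0, 1): Fraction(1, 4),
       (-1, 0): Fraction(1, 4), (0, -1): Fraction(1, 4)}

mu_x = sum(p * x for (x, y), p in pmf.items())   # 0
mu_y = sum(p * y for (x, y), p in pmf.items())   # 0
cov = sum(p * (x - mu_x) * (y - mu_y) for (x, y), p in pmf.items())
print(cov)   # 0, yet knowing X = 1 forces Y = 0

# Dependence check: P(X=1, Y=0) != P(X=1) * P(Y=0)
p_x1 = sum(p for (x, y), p in pmf.items() if x == 1)   # 1/4
p_y0 = sum(p for (x, y), p in pmf.items() if y == 0)   # 1/2
print(pmf[(1, 0)], p_x1 * p_y0)   # 1/4 vs 1/8
```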
Talk Outline (recap): all sections covered.
Recommended Resources:
The Mathworld online math encyclopedia:
             http://mathworld.wolfram.com/

Gonzalez & Woods: Review Chapter on Linear
  Algebra, Probability, & Random Variables:
   http://www.imageprocessingplace.com/root_files_V3/
                         tutorials.htm

 Javier Movellan’s useful math facts:
        http://mplab.ucsd.edu/wordpress/?page_id=75
Dana Ballard’s Natural Computation
       (some good stuff)




                                    Dayan & Abbot
                                 Theoretical Neuroscience
Contemporary Data Analysis
Rand Wilcox, Applying Contemporary
      Statistical Techniques




                                      Sheldon Ross
                               A First Course in Probability
Recommended Free
  Stats Software



  www.r-project.org




     www.scipy.org

Probability Distributions for Discrete Variables
 
Sample Space and Event,Probability,The Axioms of Probability,Bayes Theorem
Sample Space and Event,Probability,The Axioms of Probability,Bayes TheoremSample Space and Event,Probability,The Axioms of Probability,Bayes Theorem
Sample Space and Event,Probability,The Axioms of Probability,Bayes Theorem
 
Introduction to random variables
Introduction to random variablesIntroduction to random variables
Introduction to random variables
 
Discrete probability distributions
Discrete probability distributionsDiscrete probability distributions
Discrete probability distributions
 
4 2 continuous probability distributionn
4 2 continuous probability    distributionn4 2 continuous probability    distributionn
4 2 continuous probability distributionn
 

Viewers also liked

Geometrically Constrained Independent Vector Analysis
Geometrically Constrained Independent Vector AnalysisGeometrically Constrained Independent Vector Analysis
Geometrically Constrained Independent Vector AnalysisAffan Khan
 
Principal Component Analysis for Tensor Analysis and EEG classification
Principal Component Analysis for Tensor Analysis and EEG classificationPrincipal Component Analysis for Tensor Analysis and EEG classification
Principal Component Analysis for Tensor Analysis and EEG classificationTatsuya Yokota
 
Relaxation of rank-1 spatial constraint in overdetermined blind source separa...
Relaxation of rank-1 spatial constraint in overdetermined blind source separa...Relaxation of rank-1 spatial constraint in overdetermined blind source separa...
Relaxation of rank-1 spatial constraint in overdetermined blind source separa...Daichi Kitamura
 
Continuous Random variable
Continuous Random variableContinuous Random variable
Continuous Random variableJay Patel
 
Discrete Probability Distributions
Discrete  Probability DistributionsDiscrete  Probability Distributions
Discrete Probability DistributionsE-tan
 
Discrete random variable.
Discrete random variable.Discrete random variable.
Discrete random variable.Shakeel Nouman
 
Discrete Random Variables And Probability Distributions
Discrete Random Variables And Probability DistributionsDiscrete Random Variables And Probability Distributions
Discrete Random Variables And Probability DistributionsDataminingTools Inc
 

Viewers also liked (7)

Geometrically Constrained Independent Vector Analysis
Geometrically Constrained Independent Vector AnalysisGeometrically Constrained Independent Vector Analysis
Geometrically Constrained Independent Vector Analysis
 
Principal Component Analysis for Tensor Analysis and EEG classification
Principal Component Analysis for Tensor Analysis and EEG classificationPrincipal Component Analysis for Tensor Analysis and EEG classification
Principal Component Analysis for Tensor Analysis and EEG classification
 
Relaxation of rank-1 spatial constraint in overdetermined blind source separa...
Relaxation of rank-1 spatial constraint in overdetermined blind source separa...Relaxation of rank-1 spatial constraint in overdetermined blind source separa...
Relaxation of rank-1 spatial constraint in overdetermined blind source separa...
 
Continuous Random variable
Continuous Random variableContinuous Random variable
Continuous Random variable
 
Discrete Probability Distributions
Discrete  Probability DistributionsDiscrete  Probability Distributions
Discrete Probability Distributions
 
Discrete random variable.
Discrete random variable.Discrete random variable.
Discrete random variable.
 
Discrete Random Variables And Probability Distributions
Discrete Random Variables And Probability DistributionsDiscrete Random Variables And Probability Distributions
Discrete Random Variables And Probability Distributions
 

Similar to Random Variables

Probabilitydistributionlecture web
Probabilitydistributionlecture webProbabilitydistributionlecture web
Probabilitydistributionlecture webkassimics
 
Lecture 4 Probability Distributions.pptx
Lecture 4 Probability Distributions.pptxLecture 4 Probability Distributions.pptx
Lecture 4 Probability Distributions.pptxABCraftsman
 
Probability distribution 2
Probability distribution 2Probability distribution 2
Probability distribution 2Nilanjan Bhaumik
 
Introduction to probability distributions-Statistics and probability analysis
Introduction to probability distributions-Statistics and probability analysis Introduction to probability distributions-Statistics and probability analysis
Introduction to probability distributions-Statistics and probability analysis Vijay Hemmadi
 
Mba i qt unit-4.1_introduction to probability distributions
Mba i qt unit-4.1_introduction to probability distributionsMba i qt unit-4.1_introduction to probability distributions
Mba i qt unit-4.1_introduction to probability distributionsRai University
 
Tele3113 wk1wed
Tele3113 wk1wedTele3113 wk1wed
Tele3113 wk1wedVin Voro
 
04 random-variables-probability-distributionsrv
04 random-variables-probability-distributionsrv04 random-variables-probability-distributionsrv
04 random-variables-probability-distributionsrvPooja Sakhla
 
Ssp notes
Ssp notesSsp notes
Ssp notesbalu902
 
Robustness under Independent Contamination Model
Robustness under Independent Contamination ModelRobustness under Independent Contamination Model
Robustness under Independent Contamination Modelrusmike
 
Probability distribution for Dummies
Probability distribution for DummiesProbability distribution for Dummies
Probability distribution for DummiesBalaji P
 
Statistics lecture 6 (ch5)
Statistics lecture 6 (ch5)Statistics lecture 6 (ch5)
Statistics lecture 6 (ch5)jillmitchell8778
 
Quantitative Techniques random variables
Quantitative Techniques random variablesQuantitative Techniques random variables
Quantitative Techniques random variablesRohan Bhatkar
 
Principles of Actuarial Science Chapter 2
Principles of Actuarial Science Chapter 2Principles of Actuarial Science Chapter 2
Principles of Actuarial Science Chapter 2ssuser8226b2
 

Similar to Random Variables (20)

Probabilitydistributionlecture web
Probabilitydistributionlecture webProbabilitydistributionlecture web
Probabilitydistributionlecture web
 
Stats chapter 7
Stats chapter 7Stats chapter 7
Stats chapter 7
 
Probability Distribution
Probability DistributionProbability Distribution
Probability Distribution
 
Lecture 4 Probability Distributions.pptx
Lecture 4 Probability Distributions.pptxLecture 4 Probability Distributions.pptx
Lecture 4 Probability Distributions.pptx
 
lecture4.ppt
lecture4.pptlecture4.ppt
lecture4.ppt
 
Probability distribution 2
Probability distribution 2Probability distribution 2
Probability distribution 2
 
Introduction to probability distributions-Statistics and probability analysis
Introduction to probability distributions-Statistics and probability analysis Introduction to probability distributions-Statistics and probability analysis
Introduction to probability distributions-Statistics and probability analysis
 
Mba i qt unit-4.1_introduction to probability distributions
Mba i qt unit-4.1_introduction to probability distributionsMba i qt unit-4.1_introduction to probability distributions
Mba i qt unit-4.1_introduction to probability distributions
 
Tele3113 wk1wed
Tele3113 wk1wedTele3113 wk1wed
Tele3113 wk1wed
 
lecture4.pdf
lecture4.pdflecture4.pdf
lecture4.pdf
 
04 random-variables-probability-distributionsrv
04 random-variables-probability-distributionsrv04 random-variables-probability-distributionsrv
04 random-variables-probability-distributionsrv
 
Ssp notes
Ssp notesSsp notes
Ssp notes
 
Robustness under Independent Contamination Model
Robustness under Independent Contamination ModelRobustness under Independent Contamination Model
Robustness under Independent Contamination Model
 
Fec512.02
Fec512.02Fec512.02
Fec512.02
 
Probability distribution for Dummies
Probability distribution for DummiesProbability distribution for Dummies
Probability distribution for Dummies
 
Statistics lecture 6 (ch5)
Statistics lecture 6 (ch5)Statistics lecture 6 (ch5)
Statistics lecture 6 (ch5)
 
Quantitative Techniques random variables
Quantitative Techniques random variablesQuantitative Techniques random variables
Quantitative Techniques random variables
 
Principles of Actuarial Science Chapter 2
Principles of Actuarial Science Chapter 2Principles of Actuarial Science Chapter 2
Principles of Actuarial Science Chapter 2
 
Price Models
Price ModelsPrice Models
Price Models
 
FEC 512.03
FEC 512.03FEC 512.03
FEC 512.03
 

Recently uploaded

Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...DianaGray10
 
Architecting Cloud Native Applications
Architecting Cloud Native ApplicationsArchitecting Cloud Native Applications
Architecting Cloud Native ApplicationsWSO2
 
MS Copilot expands with MS Graph connectors
MS Copilot expands with MS Graph connectorsMS Copilot expands with MS Graph connectors
MS Copilot expands with MS Graph connectorsNanddeep Nachan
 
Polkadot JAM Slides - Token2049 - By Dr. Gavin Wood
Polkadot JAM Slides - Token2049 - By Dr. Gavin WoodPolkadot JAM Slides - Token2049 - By Dr. Gavin Wood
Polkadot JAM Slides - Token2049 - By Dr. Gavin WoodJuan lago vázquez
 
Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024The Digital Insurer
 
Web Form Automation for Bonterra Impact Management (fka Social Solutions Apri...
Web Form Automation for Bonterra Impact Management (fka Social Solutions Apri...Web Form Automation for Bonterra Impact Management (fka Social Solutions Apri...
Web Form Automation for Bonterra Impact Management (fka Social Solutions Apri...Jeffrey Haguewood
 
Apidays Singapore 2024 - Scalable LLM APIs for AI and Generative AI Applicati...
Apidays Singapore 2024 - Scalable LLM APIs for AI and Generative AI Applicati...Apidays Singapore 2024 - Scalable LLM APIs for AI and Generative AI Applicati...
Apidays Singapore 2024 - Scalable LLM APIs for AI and Generative AI Applicati...apidays
 
Data Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonData Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonAnna Loughnan Colquhoun
 
TrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
TrustArc Webinar - Unlock the Power of AI-Driven Data DiscoveryTrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
TrustArc Webinar - Unlock the Power of AI-Driven Data DiscoveryTrustArc
 
A Beginners Guide to Building a RAG App Using Open Source Milvus
A Beginners Guide to Building a RAG App Using Open Source MilvusA Beginners Guide to Building a RAG App Using Open Source Milvus
A Beginners Guide to Building a RAG App Using Open Source MilvusZilliz
 
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot TakeoffStrategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoffsammart93
 
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc
 
Corporate and higher education May webinar.pptx
Corporate and higher education May webinar.pptxCorporate and higher education May webinar.pptx
Corporate and higher education May webinar.pptxRustici Software
 
EMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWER
EMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWEREMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWER
EMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWERMadyBayot
 
AXA XL - Insurer Innovation Award Americas 2024
AXA XL - Insurer Innovation Award Americas 2024AXA XL - Insurer Innovation Award Americas 2024
AXA XL - Insurer Innovation Award Americas 2024The Digital Insurer
 
Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)wesley chun
 
Apidays New York 2024 - The value of a flexible API Management solution for O...
Apidays New York 2024 - The value of a flexible API Management solution for O...Apidays New York 2024 - The value of a flexible API Management solution for O...
Apidays New York 2024 - The value of a flexible API Management solution for O...apidays
 
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...apidays
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerThousandEyes
 

Recently uploaded (20)

Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
 
Architecting Cloud Native Applications
Architecting Cloud Native ApplicationsArchitecting Cloud Native Applications
Architecting Cloud Native Applications
 
MS Copilot expands with MS Graph connectors
MS Copilot expands with MS Graph connectorsMS Copilot expands with MS Graph connectors
MS Copilot expands with MS Graph connectors
 
Polkadot JAM Slides - Token2049 - By Dr. Gavin Wood
Polkadot JAM Slides - Token2049 - By Dr. Gavin WoodPolkadot JAM Slides - Token2049 - By Dr. Gavin Wood
Polkadot JAM Slides - Token2049 - By Dr. Gavin Wood
 
Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024
 
Web Form Automation for Bonterra Impact Management (fka Social Solutions Apri...
Web Form Automation for Bonterra Impact Management (fka Social Solutions Apri...Web Form Automation for Bonterra Impact Management (fka Social Solutions Apri...
Web Form Automation for Bonterra Impact Management (fka Social Solutions Apri...
 
Apidays Singapore 2024 - Scalable LLM APIs for AI and Generative AI Applicati...
Apidays Singapore 2024 - Scalable LLM APIs for AI and Generative AI Applicati...Apidays Singapore 2024 - Scalable LLM APIs for AI and Generative AI Applicati...
Apidays Singapore 2024 - Scalable LLM APIs for AI and Generative AI Applicati...
 
Data Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonData Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt Robison
 
TrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
TrustArc Webinar - Unlock the Power of AI-Driven Data DiscoveryTrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
TrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
 
A Beginners Guide to Building a RAG App Using Open Source Milvus
A Beginners Guide to Building a RAG App Using Open Source MilvusA Beginners Guide to Building a RAG App Using Open Source Milvus
A Beginners Guide to Building a RAG App Using Open Source Milvus
 
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot TakeoffStrategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
 
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
 
Corporate and higher education May webinar.pptx
Corporate and higher education May webinar.pptxCorporate and higher education May webinar.pptx
Corporate and higher education May webinar.pptx
 
EMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWER
EMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWEREMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWER
EMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWER
 
AXA XL - Insurer Innovation Award Americas 2024
AXA XL - Insurer Innovation Award Americas 2024AXA XL - Insurer Innovation Award Americas 2024
AXA XL - Insurer Innovation Award Americas 2024
 
Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)
 
Apidays New York 2024 - The value of a flexible API Management solution for O...
Apidays New York 2024 - The value of a flexible API Management solution for O...Apidays New York 2024 - The value of a flexible API Management solution for O...
Apidays New York 2024 - The value of a flexible API Management solution for O...
 
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
 
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected Worker
 

Random Variables

  • 1. Random Variables/Vectors Tomoki Tsuchida Computational & Cognitive Neuroscience Lab Department of Cognitive Science University of California, San Diego
  • 2. Talk Outline • Random Variables Defined • Types of Random Variables ‣ Discrete ‣ Continuous • Characterizing Random Variables ‣ Expected Value ‣ Variance/Standard Deviation; Entropy ‣ Linear Combinations of Random Variables • Random Vectors Defined • Characterizing Random Vectors ‣ Expected Value ‣ Covariance
  • 3. Random Variable Elementary Outcomes of a The Real Line Random Experiment =$1 Flipping a coin once =$0 • Random variable is a function of each outcome. • The probability of the r.v. (taking a particular value) is determined by the probability of the outcome.
  • 4. Example Let X be the sum of the payoffs from two coin flips. P(X = 0) = P({TT}) = 1/4; P(X = 1) = P({TH}) + P({HT}) = 1/2; P(X = 2) = P({HH}) = 1/4. The random variable X takes values {0, 1, 2}, with probabilities {1/4, 1/2, 1/4}.
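The slide's probabilities can be checked by enumerating the sample space directly; a minimal sketch, assuming the $1-per-head payoff from the earlier slide:

```python
from itertools import product
from fractions import Fraction

# Enumerate the four equally likely outcomes of two fair coin flips.
outcomes = list(product("HT", repeat=2))

# X = sum of payoffs = number of heads ($1 per head, as on the slide).
pmf = {}
for outcome in outcomes:
    x = outcome.count("H")
    pmf[x] = pmf.get(x, Fraction(0)) + Fraction(1, len(outcomes))

# X takes values 0, 1, 2 with probabilities 1/4, 1/2, 1/4.
print(sorted(pmf.items()))
```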
  • 5. Talk Outline • Random Variables Defined • Types of Random Variables ‣ Discrete ‣ Continuous • Characterizing Random Variables ‣ Expected Value ‣ Variance/Standard Deviation; Entropy ‣ Linear Combinations of Random Variables • Random Vectors Defined • Characterizing Random Vectors ‣ Expected Value ‣ Covariance
  • 6. Discrete Random Variables: Variables whose outcomes are separated by gaps Rolling a six-sided die once Flipping a coin once (and get paid for the number (and get paid for H): on the face): {0,1} {1,2,3,4,5,6}
  • 7. Discrete Random Variables: Defined by a probability mass function, P. • P(X = a) = P(a) • 0 ≤ P(a) ≤ 1 • The probability of all outcomes sums to one (from the axioms!) [Chart: pmf of rolling a fair six-sided die, outcomes 1–6 each with probability 1/6]
  • 8. Types of Probability Mass Functions: Discrete Uniform Distribution P(X = a) = 1/N (where N is the total number of distinct outcomes). [Chart: pmf of rolling a fair six-sided die]
  • 9. Types of Probability Mass Functions: Binomial Distribution [Chart: pmf of the number of heads in two flips of a fair coin: P(0) = 0.25, P(1) = 0.5, P(2) = 0.25]
  • 10. Types of Probability Mass Functions: Binomial Distribution pmf: P(X = k) = C(n, k) p^k (1 − p)^(n − k). k: the number of “successes” (in our case the outcome of heads is defined as success); p: probability of success in a single observation (in our case .5); n: the number of observations (in our case two); C(n, k) = n! / (k! (n − k)!): the number of different ways you could get k successes out of n observations.
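The binomial pmf on this slide is straightforward to evaluate in code; a small sketch using the coin example (n = 2, p = 0.5):

```python
from math import comb

def binomial_pmf(k, n, p):
    """P(X = k): probability of k successes in n trials, success prob p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Two flips of a fair coin, matching the slide's chart.
probs = [binomial_pmf(k, 2, 0.5) for k in range(3)]
print(probs)  # [0.25, 0.5, 0.25]
```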
  • 11. Talk Outline • Random Variables Defined • Types of Random Variables ‣ Discrete ‣ Continuous • Characterizing Random Variables ‣ Expected Value ‣ Variance/Standard Deviation; Entropy ‣ Linear Combinations of Random Variables • Random Vectors Defined • Characterizing Random Vectors ‣ Expected Value ‣ Covariance
  • 12. Continuous Random Variables: Variables for which an outcome always lies between two other outcomes A person’s height: ?≥a>0
  • 13. Continuous Random Variables: Defined by a probability density function. [Charts: a discrete pmf (number of heads in two coin flips) vs. a continuous pdf]
  • 14. Continuous Random Variables: Probability of a range of outcomes P(a ≤ x ≤ b) = ∫_a^b f(x) dx, e.g. p(179 ≤ x ≤ 181) or p(x ≤ 178). P(X = x) = 0 (no single outcome has any probability!)
  • 15. Continuous Random Variables: Defined by probability density function, f Continuous •f(a)≥0 •The area under the pdf must equal 1
  • 16. Types of Probability Density Functions: Continuous Uniform Distribution f(x) = 1/(b − a) if a ≤ x ≤ b, else f(x) = 0. a = lower bound, b = upper bound
  • 17. Types of Probability Density Functions: Normal (Gaussian) Distribution f(x) = (1/(σ√(2π))) e^(−(x − µ)² / (2σ²)). σ = standard deviation, µ = mean
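The Gaussian density above can be checked numerically: the area under the curve must equal 1, as the earlier pdf slide requires. A minimal sketch with a crude Riemann sum:

```python
from math import exp, pi, sqrt

def normal_pdf(x, mu, sigma):
    """Gaussian density with mean mu and standard deviation sigma."""
    return exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * sqrt(2 * pi))

# The density should integrate to 1; approximate over mu +/- 8 sigma.
mu, sigma, dx = 0.0, 1.0, 0.001
total = sum(normal_pdf(mu - 8 * sigma + i * dx, mu, sigma) * dx
            for i in range(int(16 * sigma / dx)))
print(round(total, 6))  # ~1.0
```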
  • 18. Cumulative distribution function What if we want to know P(X ≤ x)? Density function Distribution function
  • 19. Types of probability distributions There are lots and lots of distributions! (But we can always look them up on Wikipedia!)
  • 20. Talk Outline • Random Variables Defined • Types of Random Variables ‣ Discrete ‣ Continuous • Characterizing Random Variables ‣ Expected Value ‣ Variance/Standard Deviation; Entropy ‣ Linear Combinations of Random Variables • Random Vectors Defined • Characterizing Random Vectors ‣ Expected Value ‣ Covariance
  • 21. Characterizing the distribution of a random variable If we know the distribution of a random variable, we pretty much know all there is to know about the random variable.
  • 22. Characterizing the distribution of a random variable If we know the distribution of a random variable, we pretty much know all there is to know about the random variable. But with real data, we don’t know the full distribution.
  • 23. Characterizing the distribution of a random variable If we know the distribution of a random variable, we pretty much know all there is to know about the random variable. But with real data, we don’t know the full distribution.
  • 24. Characterizing the distribution of a random variable If we know the distribution of a random variable, we pretty much know all there is to know about the random variable. But with real data, we don’t know the full distribution. So we want to characterize distributions by a couple of numbers (“statistics”.)
  • 25. Characterizing the Central Tendency of a Random Variable Normal (Gaussian) Distribution p(x) = (1/(σ√(2π))) e^(−(x − µ)² / (2σ²)). σ = standard deviation, µ = mean. We know everything from the mean and STD.
  • 26. A Simple Gambling Game 1. Flip a fair coin 2. Possible outcomes: I give you $2 P(X=2)=1/2 P(X=-1)=1/2 You give me $1
  • 27. A Simple Gambling Game E(X) = $.5 [Chart: probability mass function, P(−1) = 1/2, P(2) = 1/2, over win/loss for you (in $)]
  • 28. A Simple Gambling Game E(X) = $.5 If we played this game an infinite # of times, what would the average outcome be? [Chart: probability mass function over win/loss for you (in $)]
  • 29. A Simple Gambling Game If we played this game an infinite # of times, what would the average outcome be? µ = E(X) = Σᵢ P(X = xᵢ) xᵢ — the expected value, the “mean”. E(X) = $.5
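The expected-value formula on this slide translates directly to code; a minimal sketch using the coin game (win $2 or lose $1, each with probability 1/2):

```python
def expected_value(pmf):
    """E(X) = sum_i P(X = x_i) * x_i for a discrete pmf {x_i: p_i}."""
    return sum(p * x for x, p in pmf.items())

# The coin game from the slides: +$2 or -$1, each with probability 1/2.
coin_game = {2: 0.5, -1: 0.5}
print(expected_value(coin_game))  # 0.5
```

The die game on the next slides ($8 on a 1, −$1 otherwise) has the same expected value of $0.5, which is why a further statistic is needed to distinguish them.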
  • 30. Another Gambling Game 1. Roll a fair six-sided die 2. Possible outcomes (die → payoff): 1 → $8; 2 → −$1; 3 → −$1; 4 → −$1; 5 → −$1; 6 → −$1
  • 31. Another Gambling Game What’s the mean outcome of this game? µ = E(X) = Σᵢ P(X = xᵢ) xᵢ. E(X) = $.5 [Chart: pmf over win/loss for you (in $)]
  • 32. Why should you prefer the coin game? [Charts: pmfs of the coin game and the die game, over win/loss for you (in $)]
  • 33. Talk Outline • Random Variables Defined • Types of Random Variables ‣ Discrete ‣ Continuous • Characterizing Random Variables ‣ Expected Value ‣ Variance/Standard Deviation; Moments ‣ Linear Combinations of Random Variables • Random Vectors Defined • Characterizing Random Vectors ‣ Expected Value ‣ Covariance
  • 34. Characterizing the Variability of a Random Variable [Charts: pmfs of the coin game and the die game, over win/loss for you (in $)]
  • 35. Variance: The expected value of the squared deviation from the mean Variance shows the “spread” of the distribution. σ² = Var(X) = Σᵢ P(X = xᵢ)(xᵢ − µ)². ANS: 2.25 = 9/4 dollars squared [Chart: pmf of the coin game over win/loss for you (in $)]
  • 36. Standard Deviation: The square root of the variance σ² = Var(X) = Σᵢ P(X = xᵢ)(xᵢ − µ)²; σ = √Var(X). (Why? Because variance was in the units of X². STD is in the same units as X.) ANS: 1.5 dollars
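The variance and standard-deviation formulas can be checked against the slide's answers (2.25 dollars squared, 1.5 dollars for the coin game); a minimal sketch:

```python
from math import sqrt

def variance(pmf):
    """Var(X) = sum_i P(X = x_i) * (x_i - mu)^2 for a discrete pmf."""
    mu = sum(p * x for x, p in pmf.items())
    return sum(p * (x - mu) ** 2 for x, p in pmf.items())

coin_game = {2: 0.5, -1: 0.5}     # win $2 or lose $1
print(variance(coin_game))         # 2.25 (dollars squared)
print(sqrt(variance(coin_game)))   # 1.5  (dollars)
```

Running the same two lines on the die game pmf {8: 1/6, −1: 5/6} gives σ ≈ $3.35, matching slide 37.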
  • 37. Coin Game: µ = $0.5, σ = $1.5. Die Game: µ = $0.5, σ ≈ $3.35 [Charts: pmfs over win/loss for you (in $)]
  • 38. Summary: Mean & Variance. Mean µ — definition: E(X); discrete r.v.s: Σᵢ p(xᵢ) xᵢ; continuous r.v.s: ∫_{−∞}^{∞} p(x) x dx. Variance σ² — definition: E((X − µ)²); discrete r.v.s: Σᵢ p(xᵢ)(xᵢ − µ)²; continuous r.v.s: ∫_{−∞}^{∞} p(x)(x − µ)² dx.
  • 39. Moments But why stop at the variance (~ 2nd moment)? 3rd moment E(X³): skewness. 4th moment E(X⁴): kurtosis.
  • 40. Talk Outline • Random Variables Defined • Types of Random Variables ‣ Discrete ‣ Continuous • Characterizing Random Variables ‣ Expected Value ‣ Variance/Standard Deviation; Entropy ‣ Linear Combinations of Random Variables • Random Vectors Defined • Characterizing Random Vectors ‣ Expected Value ‣ Covariance
  • 41. What happens if I scale a R.V.? Original coin game: X; scaled: Y = 2X [Charts: pmfs over win/loss for you (in $)]
  • 42. What happens if I scale a R.V.? Y = 2X. The new mean: µ_Y = Σᵢ p_Y(2xᵢ) 2xᵢ = 2 Σᵢ p_X(xᵢ) xᵢ = 2µ_X. µ_X = .5, µ_Y = 1
  • 43. What happens if I scale a R.V.? Y = 2X. The new variance: σ²_Y = Σᵢ p_Y(2xᵢ)(2xᵢ − µ_Y)² = Σᵢ p_Y(2xᵢ)(2xᵢ − 2µ_X)² = 4 Σᵢ p_X(xᵢ)(xᵢ − µ_X)² = 4σ²_X = 9
  • 44. What happens if I sum two independent R.V.s? One round: X; two rounds: Y = X₁ + X₂ (two independent copies of X) [Charts: pmfs over win/loss for you (in $)]
  • 45. What happens if I sum two independent R.V.s? Y = X₁ + X₂. The new mean: µ_Y = µ_X + µ_X = 1. The new variance: σ²_Y = σ²_X + σ²_X = 4.5
  • 46. What happens if I sum two independent identically distributed R.V.s? One round: X; two rounds: Y = X₁ + X₂ [Charts: pmfs over win/loss for you (in $)]
  • 47. Expectation is linear E(aX) = aE(X) E(X + Y ) = E(X) + E(Y ) E(X + c) = E(X) + c We could’ve calculated the previous results using these properties! Exercise: what happens to Var(aX) and Var(X+Y) ?
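The slide's exercise — what happens to Var(aX) and Var(X + Y)? — can be answered by direct computation on the coin game pmf. A sketch showing Var(aX) = a²Var(X) and, for independent variables, Var(X₁ + X₂) = Var(X₁) + Var(X₂):

```python
def var(pmf):
    """Variance of a discrete pmf {x_i: p_i}."""
    mu = sum(p * x for x, p in pmf.items())
    return sum(p * (x - mu) ** 2 for x, p in pmf.items())

X = {2: 0.5, -1: 0.5}  # the coin game; Var(X) = 2.25

# Var(aX) = a^2 Var(X): scale every outcome by a = 2.
Y = {2 * x: p for x, p in X.items()}
print(var(Y))  # 9.0 = 4 * 2.25, matching slide 43

# Var(X1 + X2) = Var(X1) + Var(X2) for independent X1, X2:
# build the pmf of the sum from the product of the marginals.
S = {}
for x1, p1 in X.items():
    for x2, p2 in X.items():
        S[x1 + x2] = S.get(x1 + x2, 0.0) + p1 * p2
print(var(S))  # 4.5 = 2.25 + 2.25, matching slide 45
```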
  • 48. What happens if I sum independent identically distributed (i.i.d.) R.V.s? [Chart: pmf of # of heads in 1 flip]
  • 49. What happens if I sum independent identically distributed (i.i.d.) R.V.s? [Chart: pmf of # of heads in 2 flips]
  • 50. What happens if I sum independent identically distributed (i.i.d.) R.V.s? [Chart: pmf of # of heads in 3 flips]
  • 51. What happens if I sum independent identically distributed (i.i.d.) R.V.s? [Chart: pmf of # of heads in 4 flips]
  • 52. What happens if I sum independent identically distributed (i.i.d.) R.V.s? What’s happening to the pmf? Ans: it’s looking more and more Gaussian [Chart: mean of 75 flips]
  • 53. What happens if I sum independent identically distributed (i.i.d.) R.V.s? Mean of 150 flips
  • 54. Central Limit Theorem: The sum of i.i.d. random variables is approximately normally distributed when the number of random variables is large. This is one reason why Gaussian variables are popularly assumed when doing statistical analysis or modeling. Another reason is that it’s mathematically simpler. [Chart: normal pdf overlaid on the mean of 150 flips] from: Oxford Dictionary of Statistics
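The slide's 150-flip example can be checked exactly: the number of heads in n fair flips is a sum of n i.i.d. Bernoulli variables, and by the CLT its pmf should lie very close to a Gaussian with µ = n/2 and σ = √n/2. A sketch comparing the two:

```python
from math import comb, exp, pi, sqrt

def binom_pmf(k, n, p=0.5):
    """Exact pmf of the number of heads in n flips."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def normal_pdf(x, mu, sigma):
    return exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * sqrt(2 * pi))

n = 150
mu, sigma = n / 2, sqrt(n) / 2
# Largest gap between the exact pmf and its Gaussian approximation.
worst = max(abs(binom_pmf(k, n) - normal_pdf(k, mu, sigma))
            for k in range(n + 1))
print(worst)  # tiny: the normal curve tracks the pmf closely
```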
• 55. The sum of two or more r.v.s with normal distributions is itself normally distributed. The number of random variables necessary to make a sum approximately Gaussian depends on the type of population distribution. [normal pdf]
  • 57. Mean of 20 Observations From: R. R. Wilcox (2003) Applying Contemporary Statistical Techniques
  • 58. 1 Observation From: R. R. Wilcox (2003) Applying Contemporary Statistical Techniques
  • 59. Mean of 20 Observations From: R. R. Wilcox (2003) Applying Contemporary Statistical Techniques
  • 60. 1 Observation From: R. R. Wilcox (2003) Applying Contemporary Statistical Techniques
  • 61. mean of 25 samples From: R. R. Wilcox (2003) Applying Contemporary Statistical Techniques
• 62. Wilcox says you need 100 samples from this distribution to get a decent approximation [histogram: mean of 50 samples] From: R. R. Wilcox (2003) Applying Contemporary Statistical Techniques
• 63. Entropy: another measure of variability. H = −Σᵢ p(xᵢ) log₂(p(xᵢ)). Any base is OK, but when base 2 is used entropy is said to be in units of "bits". [bar chart: probability mass function over UCSD voters, Democrat vs. Republican]
• 64. Entropy: another measure of variability. H = −Σᵢ p(xᵢ) log₂(p(xᵢ)). 1. Entropy is minimal (H = 0) when one outcome is certain. 2. Entropy is maximal when each of the k outcomes is equally likely: H_max = −log₂(1/k) = log₂(k). 3. Entropy is a measure of information capacity.
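The three properties above are easy to verify directly from the formula. A minimal sketch (the `entropy` helper is ours, not a library function):

```python
import math

def entropy(pmf):
    """Shannon entropy in bits: H = -sum_i p_i * log2(p_i).

    Terms with p_i = 0 contribute nothing (the limit p*log(p) -> 0).
    """
    return -sum(p * math.log2(p) for p in pmf if p > 0)

# 1. Minimal (H = 0) when one outcome is certain
print(entropy([1.0, 0.0]))    # 0.0

# 2. Maximal when all k outcomes are equally likely: H_max = log2(k)
k = 4
print(entropy([1 / k] * k))   # 2.0 = log2(4)

# A biased coin falls in between
print(entropy([0.9, 0.1]))    # ~0.469 bits
```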
• 65. Talk Outline • Random Variables Defined • Types of Random Variables ‣ Discrete ‣ Continuous • Characterizing Random Variables ‣ Expected Value ‣ Variance/Standard Deviation; Entropy ‣ Linear Combinations of Random Variables • Random Vectors Defined (do simple RT experiment) • Characterizing Random Vectors ‣ Expected Value ‣ Covariance
  • 66. What about more than one random variable? 256 EEG sensors 120 million photoreceptors
• 67. Random Vectors • An n-dimensional random vector consists of n random variables all associated with the same probability space (i.e., each outcome dictates the value of every random variable) • Example 2-D random vector: v = [X, Y]ᵀ, where X = Reaction Time and Y = Arm Length • Sample m times from v: v₁, v₂, v₃, ..., v_m, giving the 2 × m array [x₁ x₂ x₃ ... x_m; y₁ y₂ y₃ ... y_m]
• 68. Probability Distribution of a Random Vector: the "joint distribution" of the constituent r.v.s, p(v) = p(X, Y). Example: two normal r.v.s [3-D surface plot: probability over X and Y]
• 69. Probability Distribution of a Random Vector: Example: two normal r.v.s [3-D surface plot: probability over X and Y, alongside a scatterplot of 5000 observations]
• 70. What will the scatterplot of our data look like? [four candidate scatterplots: A, B, C, D]
  • 71. Talk Outline • Random Variables Defined • Types of Random Variables ‣ Discrete ‣ Continuous • Characterizing Random Variables ‣ Expected Value ‣ Variance/Standard Deviation; Entropy ‣ Linear Combinations of Random Variables • Random Vectors Defined • Characterizing Random Vectors ‣ Expected Value ‣ Covariance
• 72. Expected Value of a Random Vector • The expected value of a random vector v is simply the vector of expected values of its constituent random variables. • Example 2-D random vector: v = [X, Y]ᵀ, so E(v) = [E(X), E(Y)]ᵀ = [µX, µY]ᵀ = µv [scatterplot with E(v) marked at the point (E(X), E(Y))]
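For the reaction-time/arm-length example, the expected value of the random vector is just the component-wise mean of the samples. A sketch with made-up distribution parameters (the means 300 ms and 70 cm are assumptions for illustration, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(2)
m = 10_000

# Hypothetical 2-D random vector v = [X, Y]^T, sampled m times and stored
# as a 2 x m array (one row per component, one column per observation)
X = rng.normal(300, 50, m)  # e.g. reaction time in ms (made-up parameters)
Y = rng.normal(70, 5, m)    # e.g. arm length in cm (made-up parameters)
samples = np.vstack([X, Y])

# E(v) is the vector of component-wise expected values
mu_v = samples.mean(axis=1)
print(mu_v)  # approximately [300, 70]
```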
• 73. Variance of a Random Vector? • Is the variance of a random vector v simply the variance of its constituent random variables? • Example 2-D random vector: v = [X, Y]ᵀ, σ²v = [σ²X, σ²Y]ᵀ ?
• 74. Variance of a Random Vector? • Is the variance of a random vector v simply the variance of its constituent random variables? • Example 2-D random vector: v = [X, Y]ᵀ, σ²v = [σ²X, σ²Y]ᵀ ? [scatterplot of X vs. Y]
• 75. X and Y each have a variance of 2 [three candidate scatterplots: A, B, C]
• 76. Covariance Matrix of a Random Vector • Diagonal entries are the variance of that dimension • Off-diagonal entries are the covariance between the column and row dimensions ‣ Covariance between two random variables: Cov(X, Y) = E((X − µX)(Y − µY)). Note: Cov(X, Y) = Cov(Y, X); Cov(X, Y) = 0 if X and Y are independent; Cov(X, Y) ∝ Corr(X, Y) • Our 2-D example: v = [X, Y]ᵀ, C = [Var(X) Cov(Y, X); Cov(X, Y) Var(Y)]
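A covariance matrix can be estimated directly from data with `np.cov`. A sketch, drawing from a 2-D Gaussian whose true covariance matrix matches the Q matrix on the next slide:

```python
import numpy as np

rng = np.random.default_rng(3)

# Draw correlated samples from a 2-D Gaussian with a known covariance matrix
C_true = np.array([[2.0, 1.5],
                   [1.5, 2.0]])
samples = rng.multivariate_normal(mean=[0, 0], cov=C_true, size=100_000)

# Estimate the covariance matrix from data (rowvar=False: columns = variables)
C_hat = np.cov(samples, rowvar=False)
print(C_hat)  # close to C_true: variances on the diagonal, Cov(X,Y) off it

# The matrix is symmetric because Cov(X, Y) = Cov(Y, X)
print(np.allclose(C_hat, C_hat.T))  # True
```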
• 77. Which data = which covariance matrix? [three scatterplots: A, B, C] Q = [2 1.5; 1.5 2], S = [2 0; 0 2], R = [2 −1.5; −1.5 2]
• 78. Covariance of 0 does NOT entail independence!! • Recall: Cov(X, Y) ∝ Corr(X, Y); specifically Corr(X, Y) = Cov(X, Y) / (σX σY) • PMF of two dependent variables with a covariance of 0: p(X = 1, Y = 0) = .25; p(X = 0, Y = 1) = .25; p(X = −1, Y = 0) = .25; p(X = 0, Y = −1) = .25 • Special case: if two jointly normally distributed random variables have a covariance of 0, they ARE independent
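The four-point PMF on this slide can be simulated to confirm both halves of the claim: the covariance is (essentially) zero, yet the variables are clearly dependent. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(4)

# The four-point PMF from the slide: each (x, y) pair has probability 1/4
points = np.array([(1, 0), (0, 1), (-1, 0), (0, -1)])
idx = rng.integers(0, 4, size=100_000)
X, Y = points[idx, 0], points[idx, 1]

# Covariance is ~0: the product XY is always 0, and both means are ~0
print(np.cov(X, Y)[0, 1])

# ...but X and Y are clearly dependent: observing X = 1 forces Y = 0
print(set(Y[X == 1]))  # {0}
```

Knowing X changes the distribution of Y (e.g. X = 1 rules out Y = ±1), which is exactly what independence forbids, even though the linear association measured by covariance vanishes.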
  • 79. Talk Outline • Random Variables Defined • Types of Random Variables ‣ Discrete ‣ Continuous • Characterizing Random Variables ‣ Expected Value ‣ Variance/Standard Deviation; Entropy ‣ Linear Combinations of Random Variables • Random Vectors Defined • Characterizing Random Vectors ‣ Expected Value ‣ Covariance
• 80. Recommended Resources: The Mathworld online math encyclopedia: http://mathworld.wolfram.com/ Gonzalez & Woods: Review Chapter on Linear Algebra, Probability, & Random Variables: http://www.imageprocessingplace.com/root_files_V3/tutorials.htm Javier Movellan's useful math facts: http://mplab.ucsd.edu/wordpress/?page_id=75
  • 81. Dana Ballard’s Natural Computation (some good stuff) Dayan & Abbot Theoretical Neuroscience
  • 82. Contemporary Data Analysis Rand Wilcox, Applying Contemporary Statistical Techniques Sheldon Ross A First Course in Probability
  • 83. Recommended Free Stats Software www.r-project.org www.scipy.org
