
Foundations of Machine Learning CST 312 KTU CS Elective Notes

Introduction
About Me
Syllabus
What is Machine Learning (video)
Learn the Seven Steps in Machine Learning (video)
Overview of Machine Learning
Linear Algebra in Machine Learning

Module 1: Linear Algebra
1. Geometry of Linear Equations (video)
2. Elimination with Matrices (video)
3. Solving Systems of Equations using the Gauss Elimination Method
4. Row Echelon Form and Reduced Row Echelon Form
5. Solving Systems of Equations: Python Code
6. Practice Problems: Gauss Elimination (contact)
7. Finding the Inverse using Gauss-Jordan Elimination (video)
8. Finding the Inverse using Gauss-Jordan Elimination: Python Code
9. Vector Spaces and Subspaces
10. Linear Independence
11. Linear Independence, Basis and Dimension (video)
12. Generating Set, Basis and Span
13. Rank of a Matrix
14. Linear Mapping and Matrix Representation of a Linear Mapping
15. Basis and Change of Basis
16. Transformation Matrix in a New Basis
17. Image and Kernel
18. Example Problems (contact)

Module 2: Linear Algebra
1. Vector Norms
2. Inner Products
3.

Moment Generating Functions

This section develops and applies some properties of the moment-generating function. Despite its unlikely appearance, it turns out to be a very useful tool that can dramatically simplify certain calculations. The moment-generating function (mgf) of a random variable $X$ is $M(t) = E(e^{tX})$, provided the expectation is defined. In the discrete case, $M(t) = \sum_x e^{tx} p(x)$, and in the continuous case, $M(t) = \int_{-\infty}^{\infty} e^{tx} f(x)\,dx$. The expectation, and hence the moment-generating function, may or may not exist for a particular value of $t$. In the continuous case, existence depends on how rapidly the tails of the density decrease; for example, because the tails of the Cauchy density die down at the rate $x^{-2}$, the expectation does not exist for any $t \neq 0$ and the moment-generating function is undefined. The tails of the normal density die down at the rate $e^{-x^2}$, so the integral converges for all $t$. The $r$th moment of a random variable is $E(X^r)$, provided it exists, and when $M(t)$ exists in an open interval containing zero, the moments can be read off from derivatives at zero: $M^{(r)}(0) = E(X^r)$.
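The derivative property is easy to check symbolically. The sketch below is an illustration added here, not part of the course notes: it uses sympy to compute the mgf of an exponential density $f(x) = \lambda e^{-\lambda x}$, $x \ge 0$, and confirms that $M^{(r)}(0) = E(X^r)$ for the first few moments. The rate $\lambda = 2$ is an arbitrary choice for the demonstration.

# Check M^(r)(0) = E(X^r) for an exponential density (illustrative sketch).
import sympy as sp

t = sp.symbols('t', real=True)
x = sp.symbols('x', positive=True)
lam = sp.Integer(2)                 # assumed rate parameter, chosen arbitrarily

f = lam * sp.exp(-lam * x)          # exponential density on [0, oo)

# M(t) = E(e^{tX}); the integral converges for t < lam, and
# conds='none' tells sympy to return the value under that assumption
M = sp.integrate(sp.exp(t * x) * f, (x, 0, sp.oo), conds='none')
M = sp.simplify(M)                  # lam / (lam - t)

for r in (1, 2, 3):
    from_mgf = sp.diff(M, t, r).subs(t, 0)              # M^(r)(0)
    direct   = sp.integrate(x**r * f, (x, 0, sp.oo))    # E(X^r)
    print(r, from_mgf, direct)

For the exponential density, both computations reproduce the familiar $E(X^r) = r!/\lambda^r$.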

Syllabus Foundations of Machine Learning CST 312 KTU

Syllabus

Module 1 (LINEAR ALGEBRA): Systems of Linear Equations – Matrices, Solving Systems of Linear Equations. Vector Spaces – Linear Independence, Basis and Rank, Linear Mappings.

Module 2 (LINEAR ALGEBRA): Norms – Inner Products, Lengths and Distances, Angles and Orthogonality. Orthonormal Basis, Orthogonal Complement, Orthogonal Projections. Matrix Decompositions – Eigenvalues and Eigenvectors, Eigendecomposition and Diagonalization.

Module 3 (PROBABILITY AND DISTRIBUTIONS): Probability Space – Sample Spaces, Probability Measures, Computing Probabilities, Conditional Probability, Bayes' Rule, Independence. Random Variables – Discrete Random Variables (Bernoulli Random Variables, Binomial Distribution, Geometric and Poisson Distributions), Continuous Random Variables (Exponential Density, Gamma Density, Normal Distribution, Beta Density).

Module 4 (RANDOM VARIABLES): Functions of a Random Variable. Joint Distributions – Independent Random Variables, Conditional Distributions, Functions

Distributions Derived from the Normal Distribution

$\chi^2$, $t$, and $F$ Distributions

We know that the sum of independent gamma random variables that have the same value of $\lambda$ follows a gamma distribution, and therefore the chi-square distribution with $n$ degrees of freedom is a gamma distribution with $\alpha = n/2$ and $\lambda = 1/2$. Its density is $f(x) = \frac{1}{2^{n/2}\,\Gamma(n/2)}\, x^{n/2 - 1} e^{-x/2}$ for $x \ge 0$. From the density function of Proposition A, $f(t) = f(-t)$, so the $t$ distribution is symmetric about zero. As the number of degrees of freedom approaches infinity, the $t$ distribution tends to the standard normal distribution; in fact, for more than 20 or 30 degrees of freedom, the distributions are very close. A plot of several $t$ densities shows that the tails become lighter as the degrees of freedom increase. It can be shown that, for $n > 2$, the expectation of an $F_{m,n}$ random variable $W$ exists and equals $n/(n - 2)$. From the definitions of the $t$ and $F$ distributions, it follows that the square of a $t_n$ random variable follows an $F_{1,n}$ distribution. The Sample Mean and the Sample Variance: Let $X_1, \ldots$
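Both facts invite a quick numerical check. The sketch below is an illustration added here, not part of the notes; it assumes numpy and scipy are available. It verifies that the $\chi^2_n$ density coincides with the gamma density with $\alpha = n/2$ and $\lambda = 1/2$ (scipy parameterizes the gamma by shape $a = \alpha$ and scale $= 1/\lambda$), and that squared draws from a $t_n$ distribution are consistent with $F_{1,n}$; $n = 5$ is an arbitrary choice.

# Numerical checks for the chi-square and t/F facts above (illustrative sketch).
import numpy as np
from scipy import stats

n = 5                                    # degrees of freedom, chosen arbitrarily
x = np.linspace(0.1, 20, 200)

# chi-square(n) is gamma with alpha = n/2, lambda = 1/2 (scale = 1/lambda = 2)
print(np.allclose(stats.chi2.pdf(x, df=n),
                  stats.gamma.pdf(x, a=n/2, scale=2)))    # True

# the square of a t_n variable follows F(1, n): compare squared t samples
# against the F(1, n) cdf with a Kolmogorov-Smirnov test
rng = np.random.default_rng(0)
w = rng.standard_t(df=n, size=100_000) ** 2
print(stats.kstest(w, stats.f(dfn=1, dfd=n).cdf))

The Kolmogorov-Smirnov p-value should come out comfortably large, consistent with the identity $t_n^2 \sim F_{1,n}$.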