woRdle Play

Intro

After watching 3Blue1Brown’s video on solving Wordle using information theory, I decided to try my own approach using probability. His take on combining word frequency with expected information gain, quantified in bits, to find the solution was interesting. This is a great approach, especially when playing against a person, who may choose to play a word that’s not in the predefined list on the official Wordle website. [Read More]
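For reference, this is my gloss rather than the post’s own formula: in that framework, the expected information of a guess is the entropy of its feedback-pattern distribution,

\[
E[I] = \sum_{x} p(x) \log_2 \frac{1}{p(x)},
\]

where \(p(x)\) is the probability of observing feedback pattern \(x\) for the guess.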

Linear Regression on Coffee Rating Data

While I am reading Elements of Statistical Learning, I figured it would be a good idea to try out the machine learning methods introduced in the book. I just finished the chapter on linear regression and learned more about the penalized methods (Ridge and Lasso). Since there are abundant resources available online, it would be redundant to get into the details. I’ll briefly go over Ordinary Least Squares, Ridge, and Lasso regression, and then show a quick application of those methods in R. [Read More]
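As a rough sketch of what such an application can look like (my illustration, assuming the glmnet package and placeholder data rather than the post’s actual coffee ratings):

```r
# Ridge and lasso via glmnet; x and y are placeholder data standing in
# for the coffee rating predictors and response.
library(glmnet)

set.seed(42)
x <- matrix(rnorm(100 * 5), ncol = 5)
y <- x %*% c(1, 0.5, 0, 0, -2) + rnorm(100)

ridge_fit <- cv.glmnet(x, y, alpha = 0)  # alpha = 0 gives the ridge penalty
lasso_fit <- cv.glmnet(x, y, alpha = 1)  # alpha = 1 gives the lasso penalty

coef(ridge_fit, s = "lambda.min")  # ridge shrinks coefficients toward zero
coef(lasso_fit, s = "lambda.min")  # lasso can set some exactly to zero
```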

UIUC Public GPA Dataset Exploration with Shiny

Last year, I thought it would be a good idea to dig through the GPA data set available from here. I started building a Shiny app that lets the user explore certain aspects of the data. Almost a year has passed, and I hadn’t found the chance or the will to work on it until now. I made it really simple so that I can quickly move on to other topics instead of dragging this on for another year with an unfinished product. [Read More]
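For flavor, the skeleton of an app like this is small (a minimal sketch with placeholder data, not the actual app):

```r
# A minimal Shiny app: pick a column, see its histogram. mtcars stands
# in for the GPA data, which is not shown here.
library(shiny)

ui <- fluidPage(
  selectInput("col", "Column", choices = names(mtcars)),
  plotOutput("hist")
)

server <- function(input, output) {
  output$hist <- renderPlot(hist(mtcars[[input$col]], main = input$col))
}

shinyApp(ui, server)
```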

Grasping Power

I was reading a paper on the calculation of sample sizes, and I inevitably came across the topic of statistical power. Essentially, when you’re designing an experiment, the sample size is an important factor to consider due to limited resources. You want a sample size that is neither too small (which could result in a high chance of failing to detect true differences) nor too big (a potential waste of resources, albeit yielding better estimation). [Read More]
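Base R makes this trade-off easy to see (my illustration, not the post’s example):

```r
# Per-group sample size needed to detect a difference of 0.5 SDs
# with 80% power at the 5% significance level (two-sample t-test).
power.t.test(delta = 0.5, sd = 1, sig.level = 0.05, power = 0.8)

# Conversely: with only 20 subjects per group, power falls well below 80%.
power.t.test(n = 20, delta = 0.5, sd = 1, sig.level = 0.05)
```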

The Phi Function

I frequently encounter the \(\Phi\) and \(\Phi^{-1}\) functions in statistical texts. For some reason, the notation always throws me off guard, and I have to spend a few minutes visualizing it. This post draws a definitive link between the functions and their corresponding graphs, which ought to save me some time and build a more solid understanding of the concepts that make use of them. The \(\Phi\) function is simply the cumulative distribution function, \(F\), of the standard normal distribution. [Read More]
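In R terms (my shorthand, not the post’s code), \(\Phi\) is pnorm and \(\Phi^{-1}\) is qnorm:

```r
pnorm(1.96)         # Phi(1.96): P(Z <= 1.96), roughly 0.975
qnorm(0.975)        # Phi^{-1}(0.975): the 97.5% quantile, roughly 1.96
pnorm(qnorm(0.42))  # the round trip recovers 0.42
```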

Sorting Comparison Pt. 2

Load all the datasets that I’ve saved from the previous benchmarks.

```r
set.seed(12345)
library(microbenchmark)
library(tidyverse)
library(knitr)
library(kableExtra)
load("2019-03-01-sorting-comparison/sort_comparisons")
```

Blowing off the Dust

I see that in my environment, two variables, special_case_sort_time and trend_sort_time, are loaded. It’s been a long time since I created these data, so I have an unclear memory of what these objects are. I usually use str and class to understand what they are, and head to quickly glance at the data if it is a data frame. [Read More]
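That inspection pattern looks like this (an illustrative sketch; only the object names come from the post):

```r
class(special_case_sort_time)  # what kind of object is it?
str(trend_sort_time)           # structure: types, dimensions, names
if (is.data.frame(trend_sort_time)) head(trend_sort_time)  # peek at the first rows
```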

Sorting Comparison

As I’m self-studying algorithms and data structures with Python from here, I figured I could try some experiments with different sorting algorithms using my own implementations in R. Types of sorting algorithms I will use:

- Bubble Sort
- Insertion Sort
- Selection Sort
- Shell Sort
- Merge Sort
- Quick Sort

I will be dealing with a vector of type double, which can be a collection of any positive real numbers. [Read More]
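As a taste of the kind of implementation involved (my sketch, not the post’s code), here is insertion sort on a double vector:

```r
# Insertion sort for a numeric vector, in base R.
insertion_sort <- function(x) {
  for (i in seq_along(x)[-1]) {
    key <- x[i]
    j <- i - 1
    # shift elements larger than `key` one slot to the right
    while (j >= 1 && x[j] > key) {
      x[j + 1] <- x[j]
      j <- j - 1
    }
    x[j + 1] <- key
  }
  x
}

insertion_sort(c(3.2, 0.1, 5.7, 2.8))  # 0.1 2.8 3.2 5.7
```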

Two-Dimension LDA

LDA, Linear Discriminant Analysis, is a classification method and a dimension reduction technique; I’ll focus on classification. LDA calculates a linear discriminant function (which arises from assuming a Gaussian distribution for each class) for each class, and chooses the class that maximizes that function. The linear discriminant function therefore dictates a linear decision boundary between classes in the feature space, although discriminant analysis itself isn’t inherently linear. [Read More]
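A compact sketch of LDA as a classifier (my example, assuming the MASS package; none of this is from the post):

```r
# Fit LDA on two features of iris and classify by maximizing the
# discriminant function for each class.
library(MASS)

fit <- lda(Species ~ Petal.Length + Petal.Width, data = iris)
pred <- predict(fit, iris)$class  # class with the largest discriminant value
mean(pred == iris$Species)        # training accuracy
```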

Covariance Matrix

In my first machine learning class, in order to learn the theory behind PCA (Principal Component Analysis), we had to learn about the variance-covariance matrix. I was concurrently taking a basic course in theoretical probability and statistics, so even the idea of variance was still vague to me. Despite repeated attempts to understand covariance, I still had trouble fully capturing the intuition behind the covariance between two random variables. Even now, applying and verifying the mathematical properties of covariance requires intensive googling. [Read More]
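One way to pin the definition down in R (my illustration, not the post’s code): the sample covariance matrix is the cross-product of the centered data divided by \(n - 1\).

```r
# Verify the sample covariance matrix definition on iris's numeric columns.
X  <- as.matrix(iris[, 1:4])
n  <- nrow(X)
Xc <- scale(X, center = TRUE, scale = FALSE)  # center each column
manual_cov <- t(Xc) %*% Xc / (n - 1)
all.equal(manual_cov, cov(X))  # TRUE
```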

My First Post

This is the first blog post of my life! I will be using this blog to post about anything that I want to share in statistics. For starters, I will run a linear regression with the iris dataset.

```r
names(iris)
## [1] "Sepal.Length" "Sepal.Width"  "Petal.Length" "Petal.Width"  "Species"
```

Let’s predict Sepal.Length with Petal.Length and Petal.Width.

```r
# separate into training and testing sets
set.seed(1234)
train_ind <- sample(nrow(iris), floor(0.8 * nrow(iris)))
iris_train <- iris[train_ind, ]
iris_test <- iris[-train_ind, ]

# run linear regression (formula inferred from the sentence above;
# the teaser cuts the original call short)
iris_lm <- lm(Sepal.Length ~ Petal.Length + Petal.Width, data = iris_train)
```

[Read More]