
Variance: regression, clustering, residual and variance

This is a translation of my recent post in Chinese. I tried to write it the way a statistician would, after having spent so many years around statistics people.

-------------------------------------------Start----------------------------

Variance is an interesting word. In statistics it is defined as the "deviation from the center", which corresponds to the formula $\sigma^2 = \frac{1}{N}\sum_{i=1}^{N}(x_i-\mu)^2$, or in matrix form $\sigma^2 = \frac{1}{N}(x-\bar{x}\mathbf{1})^\top(x-\bar{x}\mathbf{1})$ (where $\mathbf{1}$ is an $N \times 1$ column vector of ones). By definition it is the second (order) central moment, i.e. the sum of squared distances to the center. It measures how much the distribution deviates from its center -- the larger, the sparser; the smaller, the denser. This is how it works in the one-dimensional world. Many of you should be familiar with this.
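As a quick illustration, here is a minimal R sketch of the definition, using a made-up vector x (note that R's built-in var() divides by N-1 rather than N):

# a small numeric illustration of the definition (x is just a made-up sample)
x <- c(2, 4, 4, 4, 5, 5, 7, 9)
N <- length(x)

# second central moment: average squared distance to the center
var_def <- sum((x - mean(x))^2) / N

# the same thing in matrix form; 'ones' is an N x 1 column of ones
ones   <- matrix(1, nrow = N, ncol = 1)
x_mat  <- matrix(x, ncol = 1)
center <- ones %*% (t(ones) %*% x_mat) / N          # N x 1 vector filled with the mean
var_mat <- t(x_mat - center) %*% (x_mat - center) / N

var_def          # 4
drop(var_mat)    # same value, from the matrix form
var(x)           # R's var() uses N - 1, so it differs slightly
sqrt(var_def)    # the standard deviation (sigma)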

Variance has a close relative called the standard deviation, which is simply the square root of the variance, denoted by $\sigma$. There is also the well-known six-sigma theory, which takes its name from the 6-sigma coverage of a normal distribution.

[Figure: sigma coverage intervals of the normal distribution]

Okay, enough about the one-dimensional case. Let's look at two dimensions. We usually visualize a two-dimensional world with a scatter plot. Here is a famous one -- Old Faithful.

[Scatter plot: the Old Faithful data]

Old Faithful is a "cone geyser located in Wyoming, in Yellowstone National Park in the United States" (wiki)... "It is one of the most predictable geographical features on Earth, erupting almost every 91 minutes." There are about two hundred points in this plot. It is a very interesting graph that can tell you a lot about variance.

Here is the intuition. Try to describe this chart in natural language (rather than in statistical or mathematical terms) -- for example, when you take your 6-year-old kid to Yellowstone and he is waiting for the next eruption. What would you tell him if you had this data set? Perhaps "I bet the longer you wait, the longer the next eruption lasts. Let's count the time!" Then the kid glances at your chart and says, "No. It tells us that if we wait for more than one hour (70 minutes), then we will see a longer eruption next (4-5 minutes)." Which way is more accurate?

Okay... enough playing with the kid. Let's now consider the scientific way. Frankly: which model will give us a smaller variance after processing?

Well, regression always comes first. Such a strong positive relationship, right? (No causality here... just correlation.)

[Plot: the fitted regression line over the scatter]

Now we obtain a significantly positive line, though the R-square from the linear model is only 81% (could the fit be better?). Let's look at the residuals.
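If you want to follow along, here is a minimal sketch using R's built-in faithful data -- I am assuming the post works with this same dataset and regresses waiting time on eruption duration, which is what reproduces an R-square around 0.81 and a slope near 10:

# simple linear regression on the built-in Old Faithful data:
# waiting time (minutes) regressed on eruption duration (minutes)
data(faithful)
fit <- lm(waiting ~ eruptions, data = faithful)

summary(fit)$r.squared   # roughly 0.81, as quoted above
coef(fit)                # intercept ~33, slope ~10.7: strongly positive

# the scatter plot with the fitted line
plot(faithful$eruptions, faithful$waiting,
     xlab = "eruptions (min)", ylab = "waiting (min)")
abline(fit, col = "red")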

[Plot: residuals from the regression]

It looks like the residuals are still quite spread out... (the ideal residual is white noise, which carries no information). In this residual chart we can roughly identify two clusters -- so why don't we try clustering?

Before running any program, let's quickly review the foundation of the K-means algorithm. In a 2-D world we define the center as $(\bar{x}, \bar{y})$, and the 2-D variance is then the sum of squared distances from each point to that center.

[Plot: the data with its center marked in blue]

The blue point is the center. No need to worry too much about outliers' impact on the mean... it looks fine for now. Wait... doesn't it look like the starry sky at night? Just a quick trick -- I promise I will get back to the main point.
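A small sketch of that center, continuing with the same built-in faithful data:

# the 2-D center is just the vector of column means, and the 2-D "variance"
# here is the total sum of squared distances from each point to that center
center <- colMeans(faithful)                       # (mean eruptions, mean waiting)
d2     <- (faithful$eruptions - center["eruptions"])^2 +
          (faithful$waiting   - center["waiting"])^2
total_ss <- sum(d2)                                # total second central moment

plot(faithful$eruptions, faithful$waiting,
     xlab = "eruptions (min)", ylab = "waiting (min)")
points(center["eruptions"], center["waiting"], col = "blue", pch = 19, cex = 2)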

 

[Plot: the points restyled as a starry night sky]

For a linear regression model, we look at the sum of squared residuals -- the smaller, the better the fit. For clustering methods we can look at an analogous measure: the sum of squared distances to the center within each cluster. K-means is computed by numerical iteration, and its goal is to minimize exactly this second central moment (see its loss function). Let's try to cluster these stars into two galaxies.
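Here is a minimal sketch of this step (k-means depends on the random start, so I fix a seed; I also run it on the raw, unscaled columns, which is my assumption about what the post does):

# K-means with two clusters; its objective is exactly the total
# within-cluster sum of squared distances (tot.withinss)
set.seed(42)
km <- kmeans(faithful, centers = 2)

km$centers          # the two "galaxy" centers
km$tot.withinss     # the second central moment it minimizes

plot(faithful$eruptions, faithful$waiting, col = km$cluster,
     xlab = "eruptions (min)", ylab = "waiting (min)")
points(km$centers[, "eruptions"], km$centers[, "waiting"], pch = 8, cex = 2)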

[Plot: the two clusters found by K-means]

After clustering, we can calculate the residuals in the same way -- the distance to the center that represents each cluster's position. Then we plot those residuals.

 

[Plot: residuals from K-means (red) and from the regression (blue)]

The red points come from K-means, while the blue ones come from the previous regression. They look similar, right?... So, back to the conversation with the kid -- both of you are right, with about 80% accuracy.

Shall we do the regression again for each cluster?

[Plot: regression lines fitted within each cluster]

Not much improvement. After clustering + regression, the R-square increases to 84% (+3 points). This is because within each cluster it is hard to find any linear pattern left in the residuals: the regression slope drops from 10 to 6 and 4 respectively, and each sub-regression delivers an R-square below 10%... so there is not much information left to exploit after clustering. Still, it is certainly better than a single regression. (The reason we use k-means rather than a simple rule like x > 3.5 is that k-means gives the optimal clustering with respect to its loss function.)
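A sketch of that comparison, continuing from the km object above. The interaction model is just my own way of packing "cluster + regression" into a single fit, so the numbers should be close to, but not exactly, the ones quoted:

# refit the regression separately inside each k-means cluster and compare
faithful$cluster <- factor(km$cluster)

by(faithful, faithful$cluster, function(d) {
  m <- lm(waiting ~ eruptions, data = d)
  c(slope = unname(coef(m)[2]), r.squared = summary(m)$r.squared)
})

# overall fit of the "cluster + regression" idea: cluster-specific
# intercepts and slopes in one interaction model
fit2 <- lm(waiting ~ cluster * eruptions, data = faithful)
summary(fit2)$r.squared    # a few points above the single regression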

Here is another question: why don't we use 3 or 5 clusters? It is mostly about overfitting... there are only about 200 points here. If the sample size were larger, we could try more clusters.
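One quick, informal check (a sketch): watch how the k-means loss itself behaves as k grows -- it always decreases, so a smaller within-cluster sum of squares alone never justifies adding clusters:

# total within-cluster sum of squares for k = 1..5;
# it shrinks monotonically, so the "elbow" matters more than the raw value
set.seed(42)
withinss <- sapply(1:5, function(k)
  kmeans(faithful[, c("eruptions", "waiting")], centers = k, nstart = 20)$tot.withinss)
plot(1:5, withinss, type = "b",
     xlab = "number of clusters k", ylab = "total within-cluster SS")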

Fair enough. Of course, statisticians won't be satisfied with these findings. The residual chart reveals an important piece of information: the distribution of the residuals is not a standard normal distribution (it is not white noise). They call this heteroscedasticity. Heteroscedasticity comes in many forms; the simplest is that the residual variance grows as x increases. Other cases are shown in the figure below.

[Figure: common patterns of heteroscedasticity]

The existence of heteroscedasticity makes our model (which is fit on the training data) less efficient. I'd say that statistical modelling is the process of fighting with the residuals' distribution -- if we can diagnose any pattern in them, there is a way to improve the model. Econometricians like to call the residuals the "rubbish bin" -- yet in some sense it is also a gold mine. Data is a limited resource... wasting it is a luxury.
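The post stops at the visual diagnosis, but for completeness here is a minimal sketch of one standard formal check, the Breusch-Pagan test from the lmtest package, applied to the fits above (a small p-value suggests the residual variance depends on the predictors):

# Breusch-Pagan test: regresses squared residuals on the predictors;
# a small p-value is evidence of heteroscedasticity
library(lmtest)
bptest(fit)      # the single regression
bptest(fit2)     # the cluster + regression model

# the usual visual check: residuals against fitted values
plot(fitted(fit), resid(fit),
     xlab = "fitted waiting (min)", ylab = "residual")
abline(h = 0, lty = 2)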

Some additional notes...


my criticism

I tried to say these things in a relatively plain tone so that they sound less subjective. In recent days I have been listening to a lot of research from industry people, so here are my comments:

  • the industry is really good at renaming existing stuff to make it easier to interpret, but these are perhaps decorations rather than fundamental innovations.
  • not many machine learning people think deeply about their algorithms. They see whether an algorithm works, but they probably do not know when it would fail.
  • there is always a trade-off between consistency and efficiency, or between technical beauty and usefulness.

...just my opinion. It always depends on the purpose.

recent research trends in NBER working papers, since 2013

Recently I got curious about which topics attract most of economists' attention. The NBER working paper series contains relatively new research output, so I used it as the raw input.

It is not hard to extract keywords from these papers' titles. After that, I went a step further and matched all single keys to academic keywords on Microsoft Academic.

NBER working paper trend

At a glance, I can identify some hot keywords:

Price. Health. Social. Policy/Public. Risk/asset/liquidity. Growth. Insurance. Education/School.

For reproducibility, my code is below.

 

grab_url <- c("http://www.nber.org/new_archive/mar14.html",
              "http://www.nber.org/new_archive/dec13.html",
              "http://www.nber.org/new_archive/sep13.html",
              "http://www.nber.org/new_archive/jun13.html",
              "http://www.nber.org/new_archive/mar13.html")

library(RCurl)
require(XML)

grab_paper <- function (grab) {
  webpage <- getURLContent(grab)
  web_content <- htmlParse(webpage,asText = TRUE)
  paper_title <- sapply(getNodeSet(web_content, path="//li/a[1]"),xmlValue)
  author <- sapply(getNodeSet(web_content, path="//li/text()[1]") ,xmlValue)
  paper_author <- data.frame(paper_title = paper_title, author = author)
  return(paper_author)
}

library(plyr)
paper_all <- ldply(grab_url,grab_paper)

titles <- strsplit(as.character(paper_all$paper_title),split="[[:space:]|[:punct:]]")
titles <- unlist(titles)

library(tm)
library(SnowballC)
titles_short <- wordStem(titles)
Freq2 <- data.frame(table(titles_short))
Freq2 <- arrange(Freq2, desc(Freq))
Freq2 <- Freq2[nchar(as.character(Freq2$titles_short))>3,]
Freq2 <- subset(Freq2, !titles_short %in% stopwords("SMART"))
Freq2$word <- reorder(Freq2$titles_short,X = nrow(Freq2) - 1:nrow(Freq2))
Freq2$common <- Freq2$word %in% c("Evidenc","Effect","Econom","Impact","Experiment","Model","Measur","Rate","Economi",
                                  "High","Data","Long","Chang","Great","Estimat","Outcom","Program","Analysi","Busi"
                                  ,"Learn","More","What")
library(ggplot2)
ggplot(Freq2[1:100, ]) +
  geom_bar(aes(x = word, y = Freq, fill = common, alpha = !common), stat = "identity") +
  coord_flip()

### get some keywords from Microsoft Academic
start_id_Set = (0:5)*100+1
require(RCurl)
require(XML)
# start_id =1
# 

get_keywords_table <- function (start_id) {
  end_id = start_id+100-1
  keyword_url <- paste0("http://academic.research.microsoft.com/RankList?entitytype=8&topDomainID=7&subDomainID=0&last=0&start=",start_id,"&end=",end_id)
  keyword_page <- getURLContent(keyword_url)
  keyword_page <- htmlParse(keyword_page,asText = TRUE)
  keyword_table <- getNodeSet(keyword_page, path="id('ctl00_MainContent_divRankList')//table")
  table_df <- readHTMLTable(keyword_table[[1]])
  names(table_df) <- c("rowid", "Keywords", "Publications", "Citations")
  return (table_df)
}

require(plyr)
keywords_set <- ldply(start_id_Set,get_keywords_table)

save(keywords_set, file="keywords_set.rdata")

 

to be an eBay seller...and explore the social network effect

It is interesting to work for eBay; however, most of the time I am looking at the massive data without really thinking about how they are generated... for instance, I have no idea how difficult it is to be a successful seller on eBay. It feels a little weird to talk about mechanism design (say, whether eBay's ecosystem is better than Taobao's) without any personal experience.

To step in, I should try it myself -- and there was a precious opportunity to do so! After the China-R conference in Beijing, two professors from Australia (Graham Williams and John Maindonald) asked if they could buy additional packs of the playing cards somewhere so they could share them with their colleagues and friends. Then why not on eBay?

Then I typed "ebay.com" in my browser... log in...sell...The photos were prepared several months before... upload... done!

However, when I was trying to copy the link...it was so long!

http://www.ebay.com/itm/R-Language-Playing-Cards-Postcard-Suit-Best-Gift-Statisticians-/321134725623

Then I noticed the lovely "share to twitter" button on the left ^O^ Let me just use this one!

A simple click leads to a new tweet... of course I need to @ some useR... Yihui would be the best choice 😛

Not sure what happened during the night... when I woke up the next morning, they were almost sold out (I listed 10!). Well... then the painful shipping process came. Thanks to the fantastic e-packet tool (http://shippingtool.ebay.cn), it was pretty convenient to complete the shipping online. OK, the postman will take care of them. Goodbye, my cards...

The special benefit is that now I realize how powerful a social network can be. Without spreading the information through Twitter, things would have moved much more slowly if we had manually copied and pasted the URL into emails. Social network sites handle it just the right way -- "word of mouth" helps customers select goods based on an information source they trust. Some comments:

Karthik Ram@_inundata 30 May

Totally bizarre #rstats playing cards for sale on @ebay. http://bit.ly/18BmwTg 

Geoff J@geoffjentry 30 May

@_inundata I once bought a "used wife" on @ebay yet somehow this is odder.

OK... is it that weird?

BTW, the integration of the eBay site and Twitter is pretty good. Once the link is attached, the listing image also appears. See below.

[Screenshot: the tweet by Karthik Ram (@_inundata) on Twitter]