House Price Investigation Via Python

Comprehensive Data Exploration With Python

The most difficult thing in life is to know yourself
We will be looking at the following topics:

  1. Understand the problem. We'll look at each variable and do a philosophical analysis about their meaning and importance for this problem.
  2. Univariable study. We'll just focus on the dependent variable ('SalePrice') and try to know a little bit more about it.
  3. Multivariate study. We'll try to understand how the dependent variable and independent variables relate.
  4. Basic cleaning. We'll clean the dataset and handle the missing data, outliers and categorical variables.
  5. Test assumptions. We'll check if our data meets the assumptions required by most multivariate techniques.
In [2]:
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
from scipy.stats import norm
from sklearn.preprocessing import StandardScaler
from scipy import stats
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
In [3]:
#bring in the six packs
df_train = pd.read_csv('Data/house price/train.csv')
In [4]:
#check the decoration
df_train.columns
Out[4]:
Index(['Id', 'MSSubClass', 'MSZoning', 'LotFrontage', 'LotArea', 'Street',
       'Alley', 'LotShape', 'LandContour', 'Utilities', 'LotConfig',
       'LandSlope', 'Neighborhood', 'Condition1', 'Condition2', 'BldgType',
       'HouseStyle', 'OverallQual', 'OverallCond', 'YearBuilt', 'YearRemodAdd',
       'RoofStyle', 'RoofMatl', 'Exterior1st', 'Exterior2nd', 'MasVnrType',
       'MasVnrArea', 'ExterQual', 'ExterCond', 'Foundation', 'BsmtQual',
       'BsmtCond', 'BsmtExposure', 'BsmtFinType1', 'BsmtFinSF1',
       'BsmtFinType2', 'BsmtFinSF2', 'BsmtUnfSF', 'TotalBsmtSF', 'Heating',
       'HeatingQC', 'CentralAir', 'Electrical', '1stFlrSF', '2ndFlrSF',
       'LowQualFinSF', 'GrLivArea', 'BsmtFullBath', 'BsmtHalfBath', 'FullBath',
       'HalfBath', 'BedroomAbvGr', 'KitchenAbvGr', 'KitchenQual',
       'TotRmsAbvGrd', 'Functional', 'Fireplaces', 'FireplaceQu', 'GarageType',
       'GarageYrBlt', 'GarageFinish', 'GarageCars', 'GarageArea', 'GarageQual',
       'GarageCond', 'PavedDrive', 'WoodDeckSF', 'OpenPorchSF',
       'EnclosedPorch', '3SsnPorch', 'ScreenPorch', 'PoolArea', 'PoolQC',
       'Fence', 'MiscFeature', 'MiscVal', 'MoSold', 'YrSold', 'SaleType',
       'SaleCondition', 'SalePrice'],
      dtype='object')

What can we expect?

In order to understand our data, we can and should look at each variable and try to understand its meaning and relevance to this problem. Although this can be time-consuming, it will give us a flavour of our dataset.

In order to have some discipline in our analysis, we can create a table with the following columns (a minimal sketch in code follows the list):

  • Variable - Variable name.
  • Type - Identification of the variable type, i.e. whether it's numerical or categorical. Note that a categorical variable may be encoded as numbers, as long as those numbers map to a small set of categories.
  • Segment - Identification of the variable segment. In this dataset we can have three possible segments: Building, Space or Location. By Building we mean a variable that relates to the physical characteristics of the building (e.g. 'OverallQual'). By Space we mean a variable that reports space properties of the house (e.g. 'TotalBsmtSF'). Finally, by Location we mean a variable that gives information about the place where the house is located (e.g. 'Neighborhood').
  • Expectation - Our expectation about the variable's influence on 'SalePrice'. We can use a categorical scale with High, Medium and Low as possible values.
  • Conclusion - Our conclusion about the importance of the variable, after a quick look at the data. We can keep the same categorical scale as in Expectation.
  • Comment - Any general comments that occurred to us.
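As a minimal sketch (the rows and values below are purely illustrative placeholders, not conclusions drawn from the data), such a table can live in a small DataFrame:

In [ ]:
# Illustrative 'data dictionary' skeleton -- one row per variable.
# The example entries are assumptions for demonstration, not findings.
variables = pd.DataFrame({
    'Variable':    ['OverallQual', 'TotalBsmtSF', 'Neighborhood'],
    'Type':        ['Categorical', 'Numerical', 'Categorical'],
    'Segment':     ['Building', 'Space', 'Location'],
    'Expectation': ['High', 'High', 'Medium'],
    'Conclusion':  [None, None, None],  # filled in after a quick look at the data
    'Comment':     ['', '', ''],
})
variables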

While Type and Segment are just for possible future reference, the column Expectation is important because it will help us develop a 'sixth sense'. To fill this column, we should read the description of each variable and, one by one, ask ourselves:

  • Do we think about this variable when we are buying a house? (e.g. when we think about the house of our dreams, do we care about its masonry veneer type?)
  • If so, how important would this variable be? (e.g. what is the impact of having Excellent material on the exterior instead of Poor? And of having Excellent instead of Good?)
  • Is this information already described by any other variable? (e.g. if 'LandContour' gives the flatness of the property, do we really need to know the 'LandSlope'?)
After this daunting exercise, we can filter the table and look carefully at the variables with a 'High' expectation. Then, we can rush into some scatter plots between those variables and the target variable 'SalePrice', filling in the Conclusion column, which is just the correction of our expectations.

I went through this process and concluded that the following variables can play an important role in this problem:

  • OverallQual
  • YearBuilt
  • TotalBsmtSF
  • GrLivArea

I ended up with two 'building' variables ('OverallQual' and 'YearBuilt') and two 'space' variables ('TotalBsmtSF' and 'GrLivArea'). This might be a little unexpected, as it goes against the real estate mantra that all that matters is 'location, location and location'. It is possible that this quick data examination process was a bit harsh on categorical variables. For example, I expected the 'Neighborhood' variable to be more relevant, but after the data examination I ended up excluding it. Maybe this is related to the use of scatter plots instead of boxplots, which are more suitable for visualizing categorical variables. The way we visualize data often influences our conclusions.
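To see what that scatter-plot bias may have hidden, here is a quick boxplot of 'Neighborhood' against 'SalePrice' (a side check, not part of the original shortlist):

In [ ]:
# Boxplots suit categorical variables like 'Neighborhood' better than
# scatter plots do; a quick look at what the first pass may have missed.
f, ax = plt.subplots(figsize=(16, 8))
sns.boxplot(x='Neighborhood', y='SalePrice', data=df_train)
plt.xticks(rotation=90);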


Analysing the Target Variable - SalePrice

In [5]:
#descriptive statistics summary
df_train['SalePrice'].describe()
Out[5]:
count      1460.000000
mean     180921.195890
std       79442.502883
min       34900.000000
25%      129975.000000
50%      163000.000000
75%      214000.000000
max      755000.000000
Name: SalePrice, dtype: float64

Great, the minimum price is larger than zero, and there are no abnormalities in the description that would bias the model and destroy it.

Let's take a closer look at the target variable.

In [6]:
#histogram (sns.distplot was removed from recent seaborn releases; histplot is the modern equivalent)
sns.histplot(df_train['SalePrice'], kde=True);

We can see that 'SalePrice':

  • Deviates from the normal distribution.
  • Has appreciable positive skewness.
  • Shows peakedness.

Let's take a look at the skewness and kurtosis of our target variable.

In [7]:
#skewness and kurtosis
print("Skewness: %f" % df_train['SalePrice'].skew())
print("Kurtosis: %f" % df_train['SalePrice'].kurt())
Skewness: 1.882876
Kurtosis: 6.536282
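A common remedy for this kind of positive skew is a log transformation. As a quick, optional preview (the formal normality checks belong to the 'test assumptions' step of our plan):

In [ ]:
# Preview: the log transform usually tames this kind of positive skew.
log_price = np.log(df_train['SalePrice'])
sns.histplot(log_price, kde=True)
print("Skewness after log: %f" % log_price.skew())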

Relationship of the target variable with the Numerical variables

In [8]:
#scatter plot grlivarea/saleprice
var = 'GrLivArea'
data = pd.concat([df_train['SalePrice'], df_train[var]], axis=1)
data.plot.scatter(x=var, y='SalePrice', ylim=(0,800000));

It looks like 'SalePrice' and 'GrLivArea' have a positive, roughly linear relationship.

In [9]:
#scatter plot totalbsmtsf/saleprice
var = 'TotalBsmtSF'
data = pd.concat([df_train['SalePrice'], df_train[var]], axis=1)
data.plot.scatter(x=var, y='SalePrice', ylim=(0,800000));

'TotalBsmtSF' also appears to be correlated with the target variable 'SalePrice'.
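To put numbers on these visual impressions, a quick check of the Pearson correlations (just pandas' built-in .corr(); not part of the original flow):

In [ ]:
# Quantify the visual impression: Pearson correlation with 'SalePrice'.
for col in ['GrLivArea', 'TotalBsmtSF']:
    print("%s: %.3f" % (col, df_train['SalePrice'].corr(df_train[col])))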

Relationship with the Categorical features

In [10]:
#box plot overallqual/saleprice
var = 'OverallQual'
data = pd.concat([df_train['SalePrice'], df_train[var]], axis=1)
f, ax = plt.subplots(figsize=(8, 6))
fig = sns.boxplot(x=var, y="SalePrice", data=data)
fig.axis(ymin=0, ymax=800000);

'SalePrice' is clearly affected by 'OverallQual', as we can see in the chart above.

In [11]:
var = 'YearBuilt'
data = pd.concat([df_train['SalePrice'], df_train[var]], axis=1)
f, ax = plt.subplots(figsize=(16, 8))
fig = sns.boxplot(x=var, y="SalePrice", data=data)
fig.axis(ymin=0, ymax=800000);
plt.xticks(rotation=90);

In Summary:

  • 'GrLivArea' and 'TotalBsmtSF' seem to be linearly related with 'SalePrice'. Both relationships are positive, which means that as one variable increases, the other also increases. In the case of 'TotalBsmtSF', we can see that the slope of the linear relationship is particularly high.
  • 'OverallQual' and 'YearBuilt' also seem to be related with 'SalePrice'. The relationship seems to be stronger in the case of 'OverallQual', where the box plot shows how sales prices increase with the overall quality.

We just analysed four variables, but there are many others that we should analyse. The trick here seems to be the choice of the right features (feature selection) rather than the definition of complex relationships between them (feature engineering).

That said, let's separate the wheat from the chaff.

Objective Analysis of the variables

Until now we have followed our intuition and analysed the variables we thought were important. In spite of our efforts to give an objective character to our analysis, we must say that our starting point was subjective.

Let's overcome inertia and do a more objective analysis.


Plasma Soup

In the very beginning there was nothing except for a plasma soup. What is known of these brief moments in time, at the start of our study of cosmology, is largely conjectural. However, science has devised some sketch of what probably happened, based on what is known about the universe today.


To explore the universe, we start with some practical recipes to make sense of our plasma soup:

  • Correlation matrix visualised with a heatmap
  • Target variable correlation matrix (zoomed heatmap)
  • Scatter plots between the most correlated variables
In [14]:
corrmat = df_train.corr(numeric_only=True)  # numeric_only skips non-numeric columns (required on pandas >= 2.0)
f, ax = plt.subplots(figsize=(12, 9))
sns.heatmap(data=corrmat, vmax=0.8, square=True);

At first sight, there are two red colored squares that get my attention. The first one refers to the 'TotalBsmtSF' and '1stFlrSF' variables, and the second one refers to the 'GarageX' variables. Both cases show how significant the correlation is between these variables. Actually, this correlation is so strong that it can indicate a situation of multicollinearity. If we think about these variables, we can conclude that they give almost the same information so multicollinearity really occurs.

Heatmaps are great for detecting these kinds of situations, and in problems dominated by feature selection, like ours, they are an essential tool.
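A quick spot check of the two suspected pairs confirms the visual impression (values near 1 mean the variables carry nearly the same information):

In [ ]:
# Spot-check the suspected multicollinear pairs from the heatmap.
print(df_train[['TotalBsmtSF', '1stFlrSF']].corr())
print(df_train[['GarageCars', 'GarageArea']].corr())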

Another thing that got my attention was the 'SalePrice' correlations. We can see our well-known 'GrLivArea', 'TotalBsmtSF', and 'OverallQual' saying a big 'Hi!', but we can also see many other variables that should be taken into account. That's what we will do next.

'SalePrice' correlation matrix (zoomed heatmap style)
In [16]:
k = 10
cols = corrmat.nlargest(k, 'SalePrice')['SalePrice'].index
cm = np.corrcoef(df_train[cols].values.T)
sns.set(font_scale=1.25)
hm = sns.heatmap(data=cm, cbar=True, annot=True, square=True, fmt='.2f', annot_kws={'size': 10},
                 yticklabels=cols.values, xticklabels=cols.values)
plt.show()

According to our crystal ball, these are the variables most correlated with 'SalePrice'. My thoughts on this:

  • 'OverallQual', 'GrLivArea' and 'TotalBsmtSF' are strongly correlated with 'SalePrice'. Check!
  • 'GarageCars' and 'GarageArea' are also some of the most strongly correlated variables. However, as we discussed above, the number of cars that fit into the garage is a consequence of the garage area. 'GarageCars' and 'GarageArea' are like twin brothers. You'll never be able to distinguish them. Therefore, we just need one of these variables in our analysis (we can keep 'GarageCars' since its correlation with 'SalePrice' is higher).
  • 'TotalBsmtSF' and '1stFlrSF' also seem to be twin brothers. We can keep 'TotalBsmtSF' just to say that our first guess was right.
  • 'FullBath'?? Really?
  • 'TotRmsAbvGrd' and 'GrLivArea', twin brothers again.
  • Ah... 'YearBuilt'... It seems that 'YearBuilt' is slightly correlated with 'SalePrice'. Honestly, it scares me to think about 'YearBuilt' because I start feeling that we should do a little bit of time-series analysis to get this right. More on that later.
Let's proceed to the scatter plots.

Scatter plots between 'SalePrice' and the correlated variables: get ready for what you're about to see. I must confess the first time I saw these scatter plots I was blown away! So much information in such a small space... It's just amazing. Shout out to seaborn for their great library.

In [18]:
sns.set()
cols = ['SalePrice', 'OverallQual', 'GrLivArea', 'GarageCars', 'TotalBsmtSF', 'FullBath', 'YearBuilt']
sns.pairplot(df_train[cols], height=2.5)  # 'size' was renamed to 'height' in seaborn 0.9
plt.show()

Although we already know some of the main figures, this mega scatter plot gives us a reasonable idea about the relationships between variables.

One of the figures we may find interesting is the one between 'TotalBsmtSF' and 'GrLivArea'. In this figure we can see the dots drawing a linear line that almost acts like a border. It totally makes sense that the majority of the dots stay below that line. Basement areas can be equal to the above-ground living area, but we don't expect a basement area bigger than the above-ground living area (unless you're trying to buy a bunker).
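That 'border' claim is easy to verify with a one-line count (a quick sanity check, not in the original flow):

In [ ]:
# How many basements exceed the above-ground living area? (the 'bunkers')
(df_train['TotalBsmtSF'] > df_train['GrLivArea']).sum()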

The plot concerning 'SalePrice' and 'YearBuilt' can also make us think. At the bottom of the 'dots cloud', we see what almost appears to be a shy exponential function (be creative). We can also see this same tendency in the upper limit of the 'dots cloud' (be even more creative). Also, notice how the dots for the most recent years tend to stay above this limit.
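One quick way to make that tendency concrete is the median sale price by decade built (a rough sketch; proper time-series analysis is out of scope here):

In [ ]:
# Median sale price by decade built makes the upward tendency visible.
decade = df_train['YearBuilt'] // 10 * 10
df_train.groupby(decade)['SalePrice'].median().plot();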

Missing Data

Important questions when thinking about missing data:

  • How prevalent is the missing data?
  • Is missing data random or does it have a pattern?

The answer to these questions is important for practical reasons, because missing data can imply a reduction of the sample size. This can prevent us from proceeding with the analysis. Moreover, from a substantive perspective, we need to ensure that the missing data process is not biased and hiding an inconvenient truth.

In [19]:
#missing data
total = df_train.isnull().sum().sort_values(ascending = False)
percent = (df_train.isnull().sum()/ df_train.isnull().count()).sort_values(ascending = False)
missing_data = pd.concat([total,percent], axis = 1, keys = ['Total','Percent'])
missing_data.head(20)
Out[19]:
Total Percent
PoolQC 1453 0.995205
MiscFeature 1406 0.963014
Alley 1369 0.937671
Fence 1179 0.807534
FireplaceQu 690 0.472603
LotFrontage 259 0.177397
GarageCond 81 0.055479
GarageType 81 0.055479
GarageYrBlt 81 0.055479
GarageFinish 81 0.055479
GarageQual 81 0.055479
BsmtExposure 38 0.026027
BsmtFinType2 38 0.026027
BsmtFinType1 37 0.025342
BsmtCond 37 0.025342
BsmtQual 37 0.025342
MasVnrArea 8 0.005479
MasVnrType 8 0.005479
Electrical 1 0.000685
Utilities 0 0.000000

Let's analyse this to understand how to handle the missing data.

We'll consider that when more than 15% of the data is missing, we should delete the corresponding variable and pretend it never existed.

This means that we will not try any trick to fill the missing data in these cases. According to this, there is a set of variables (e.g. 'PoolQC', 'MiscFeature', 'Alley', etc.) that we should delete. The point is: will we miss this data? I don't think so. None of these variables seem to be very important, since most of them are not aspects we think about when buying a house (maybe that's the reason why data is missing?).
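The variables crossing the 15% threshold can be listed directly from the table above:

In [ ]:
# Variables crossing the 15% missing-data threshold.
missing_data[missing_data['Percent'] > 0.15].index.tolist()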

Moreover, looking closer at the variables, we could say that variables like 'PoolQC', 'MiscFeature' and 'FireplaceQu' are strong candidates for outliers, so we'll be happy to delete them.

As for the remaining cases, we can see that the 'GarageX' variables have the same number of missing data points. I bet the missing data refers to the same set of observations (although I will not check it; it's just 5%, and we should not spend $20 on $5 problems). Since the most important information regarding garages is expressed by 'GarageCars', and considering that we are just talking about 5% of missing data, I'll delete the mentioned 'GarageX' variables. The same logic applies to the 'BsmtX' variables.
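For the curious, the check the text skips is a one-liner (run it before the drop below):

In [ ]:
# Do the 'GarageX' columns miss values on exactly the same rows?
# Returns True if every row missing any of them is missing all of them.
garage_cols = ['GarageType', 'GarageYrBlt', 'GarageFinish', 'GarageQual', 'GarageCond']
df_train[garage_cols].isnull().any(axis=1).equals(df_train[garage_cols].isnull().all(axis=1))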

Regarding 'MasVnrArea' and 'MasVnrType', we can consider that these variables are not essential. Furthermore, they have a strong correlation with 'YearBuilt' and 'OverallQual' which are already considered. Thus, we will not lose information if we delete 'MasVnrArea' and 'MasVnrType'.

Finally, we have one missing observation in 'Electrical'. Since it is just one observation, we'll delete this observation and keep the variable.

In summary, to handle missing data, we'll delete all the variables with missing data, except the variable 'Electrical'. In 'Electrical' we'll just delete the observation with missing data.

In [20]:
#dealing with missing data
df_train = df_train.drop(missing_data[missing_data['Total'] > 1].index, axis=1)  # axis keyword required on pandas >= 2.0
df_train = df_train.drop(df_train.loc[df_train['Electrical'].isnull()].index)
df_train.isnull().sum().max() #just checking that there's no missing data missing...
Out[20]:
0

Outliers