import pandas as pd
import numpy as np
drinks = pd.read_csv('http://bit.ly/drinksbycountry')
movies = pd.read_csv('http://bit.ly/imdbratings')
orders = pd.read_csv('http://bit.ly/chiporders', sep='\t')
orders['item_price'] = orders.item_price.str.replace('$', '').astype('float')
stocks = pd.read_csv('http://bit.ly/smallstocks', parse_dates=['Date'])
titanic = pd.read_csv('http://bit.ly/kaggletrain')
ufo = pd.read_csv('http://bit.ly/uforeports', parse_dates=['Time'])
Sometimes you need to know the pandas version you're using, especially when reading the pandas documentation. You can show the pandas version by typing:
pd.__version__
'0.24.2'
But if you also need to know the versions of pandas' dependencies, you can use the show_versions() function:
pd.show_versions()
INSTALLED VERSIONS
------------------
commit: None
python: 3.7.3.final.0
python-bits: 64
OS: Darwin
OS-release: 18.6.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8

pandas: 0.24.2
pytest: None
pip: 19.1.1
setuptools: 41.0.1
Cython: None
numpy: 1.16.4
scipy: None
pyarrow: None
xarray: None
IPython: 7.5.0
sphinx: None
patsy: None
dateutil: 2.8.0
pytz: 2019.1
blosc: None
bottleneck: None
tables: None
numexpr: None
feather: None
matplotlib: 3.1.0
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml.etree: None
bs4: None
html5lib: None
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: 2.10.1
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None
gcsfs: None
You can see the versions of Python, pandas, NumPy, matplotlib, and more.
Let's say that you want to demonstrate some pandas code. You need an example DataFrame to work with.
There are many ways to do this, but my favorite way is to pass a dictionary to the DataFrame constructor, in which the dictionary keys are the column names and the dictionary values are lists of column values:
df = pd.DataFrame({'col one':[100, 200], 'col two':[300, 400]})
df
Now if you need a much larger DataFrame, the above method will require way too much typing. In that case, you can use NumPy's random.rand() function, tell it the number of rows and columns, and pass that to the DataFrame constructor:
pd.DataFrame(np.random.rand(4, 8))
That's pretty good, but if you also want non-numeric column names, you can coerce a string of letters to a list and then pass that list to the columns parameter:
pd.DataFrame(np.random.rand(4, 8), columns=list('abcdefgh'))
As you might guess, your string will need to have the same number of characters as there are columns.
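If you'd rather not count out the letters by hand, one option (a small sketch, not from the original) is to slice them from Python's built-in string module, so the list length always matches the number of columns:
import string
# slice the first 8 letters so the column list matches the 8 columns exactly
pd.DataFrame(np.random.rand(4, 8), columns=list(string.ascii_lowercase[:8]))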
Let's take a look at the example DataFrame we created in the last trick:
df
I prefer to use dot notation to select pandas columns, but that won't work since the column names have spaces. Let's fix this.
The most flexible method for renaming columns is the rename() method. You pass it a dictionary in which the keys are the old names and the values are the new names, and you also specify the axis:
df = df.rename({'col one':'col_one', 'col two':'col_two'}, axis='columns')
The best thing about this method is that you can use it to rename any number of columns, whether it be just one column or all columns.
Now if you're going to rename all of the columns at once, a simpler method is just to overwrite the columns attribute of the DataFrame:
df.columns = ['col_one', 'col_two']
Now if the only thing you're doing is replacing spaces with underscores, an even better method is to use the str.replace() method, since you don't have to type out all of the column names:
df.columns = df.columns.str.replace(' ', '_')
All three of these methods have the same result, which is to rename the columns so that they don't have any spaces:
Finally, if you just need to add a prefix or suffix to all of your column names, you can use the add_prefix() method...
df.add_prefix('X_')
...or the add_suffix() method:
df.add_suffix('_Y')
Let's take a look at the drinks DataFrame:
drinks.head()
This is a dataset of average alcohol consumption by country. What if you wanted to reverse the order of the rows?
The most straightforward method is to use the loc accessor and pass it ::-1, which is the same slicing notation used to reverse a Python list:
drinks.loc[::-1].head()
What if you also wanted to reset the index so that it starts at zero?
You would use the reset_index() method and tell it to drop the old index entirely:
drinks.loc[::-1].reset_index(drop=True).head()
As you can see, the rows are in reverse order but the index has been reset to the default integer index.
Similar to the previous trick, you can also use loc to reverse the left-to-right order of your columns:
drinks.loc[:, ::-1].head()
The colon before the comma means "select all rows", and the ::-1 after the comma means "reverse the columns", which is why "country" is now on the right side.
Here are the data types of the drinks DataFrame:
drinks.dtypes
country                          object
beer_servings                     int64
spirit_servings                   int64
wine_servings                     int64
total_litres_of_pure_alcohol    float64
continent                        object
dtype: object
Let's say you need to select only the numeric columns. You can use the select_dtypes() method:
drinks.select_dtypes(include='number').head()
This includes both int and float columns.
You could also use this method to select just the object columns:
drinks.select_dtypes(include='object').head()
You can tell it to include multiple data types by passing a list:
drinks.select_dtypes(include=['number', 'object', 'category', 'datetime']).head()
You can also tell it to exclude certain data types:
drinks.select_dtypes(exclude='number').head()
Let's create another example DataFrame:
df = pd.DataFrame({'col_one':['1.1', '2.2', '3.3'],
                   'col_two':['4.4', '5.5', '6.6'],
                   'col_three':['7.7', '8.8', '-']})
df
These numbers are actually stored as strings, which results in object columns:
df.dtypes
col_one      object
col_two      object
col_three    object
dtype: object
In order to do mathematical operations on these columns, we need to convert the data types to numeric. You can use the astype() method on the first two columns:
df.astype({'col_one':'float', 'col_two':'float'}).dtypes
col_one      float64
col_two      float64
col_three     object
dtype: object
However, this would have resulted in an error if you tried to use it on the third column, because that column contains a dash to represent zero and pandas doesn't understand how to handle it.
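If you want to see the failure for yourself, here's a quick sketch (not part of the original code) that catches the error astype() raises on that column:
# attempting the conversion on col_three raises a ValueError,
# because the dash can't be parsed as a float
try:
    df.col_three.astype('float')
except ValueError as e:
    print(e)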
Instead, you can use the to_numeric() function on the third column and tell it to convert any invalid input into NaN values:
pd.to_numeric(df.col_three, errors='coerce')
0    7.7
1    8.8
2    NaN
Name: col_three, dtype: float64
If you know that the NaN values actually represent zeros, you can fill them with zeros using the fillna() method:
pd.to_numeric(df.col_three, errors='coerce').fillna(0)
0    7.7
1    8.8
2    0.0
Name: col_three, dtype: float64
Finally, you can apply this function to the entire DataFrame all at once by using the apply() method:
df = df.apply(pd.to_numeric, errors='coerce').fillna(0)
df
This one line of code accomplishes our goal, because all of the data types have now been converted to float:
col_one      float64
col_two      float64
col_three    float64
dtype: object
pandas DataFrames are designed to fit into memory, and so sometimes you need to reduce the DataFrame size in order to work with it on your system.
Here's the size of the drinks DataFrame:
drinks.info(memory_usage='deep')
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 193 entries, 0 to 192
Data columns (total 6 columns):
country                         193 non-null object
beer_servings                   193 non-null int64
spirit_servings                 193 non-null int64
wine_servings                   193 non-null int64
total_litres_of_pure_alcohol    193 non-null float64
continent                       193 non-null object
dtypes: float64(1), int64(3), object(2)
memory usage: 30.4 KB
You can see that it currently uses 30.4 KB.
If you're having performance problems with your DataFrame, or you can't even read it into memory, there are two easy steps you can take during the file reading process to reduce the DataFrame size.
The first step is to only read in the columns that you actually need, which we specify with the "usecols" parameter:
cols = ['beer_servings', 'continent']
small_drinks = pd.read_csv('http://bit.ly/drinksbycountry', usecols=cols)
small_drinks.info(memory_usage='deep')
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 193 entries, 0 to 192
Data columns (total 2 columns):
beer_servings    193 non-null int64
continent        193 non-null object
dtypes: int64(1), object(1)
memory usage: 13.6 KB
By only reading in these two columns, we've reduced the DataFrame size to 13.6 KB.
The second step is to convert any object columns containing categorical data to the category data type, which we specify with the "dtype" parameter:
dtypes = {'continent':'category'}
smaller_drinks = pd.read_csv('http://bit.ly/drinksbycountry', usecols=cols, dtype=dtypes)
smaller_drinks.info(memory_usage='deep')
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 193 entries, 0 to 192
Data columns (total 2 columns):
beer_servings    193 non-null int64
continent        193 non-null category
dtypes: category(1), int64(1)
memory usage: 2.3 KB
By reading in the continent column as the category data type, we've further reduced the DataFrame size to 2.3 KB.
Keep in mind that the category data type will only reduce memory usage if you have a small number of categories relative to the number of rows.
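If you want to verify the savings for a single column, here's a rough before-and-after comparison (a sketch, assuming the drinks DataFrame loaded above):
# memory used by the continent column as object vs. category (in bytes)
print(drinks.continent.memory_usage(deep=True))
print(drinks.continent.astype('category').memory_usage(deep=True))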
Let's say that your dataset is spread across multiple files, but you want to read the dataset into a single DataFrame.
For example, I have a small dataset of stock data in which each CSV file only includes a single day. Here's the first day:
pd.read_csv('data/stocks1.csv')
Here's the second day:
pd.read_csv('data/stocks2.csv')
And here's the third day:
pd.read_csv('data/stocks3.csv')
You could read each CSV file into its own DataFrame, combine them together, and then delete the original DataFrames, but that would be memory inefficient and require a lot of code.
A better solution is to use the built-in glob module:
from glob import glob
You can pass a pattern to glob(), including wildcard characters, and it will return a list of all files that match that pattern.
In this case, glob is looking in the "data" subdirectory for all CSV files that start with the word "stocks":
stock_files = sorted(glob('data/stocks*.csv'))
stock_files
['data/stocks1.csv', 'data/stocks2.csv', 'data/stocks3.csv']
glob returns filenames in an arbitrary order, which is why we sorted the list using Python's built-in sorted() function.
We can then use a generator expression to read each of the files using read_csv() and pass the results to the concat() function, which will concatenate the rows into a single DataFrame:
pd.concat((pd.read_csv(file) for file in stock_files))
Unfortunately, there are now duplicate values in the index. To avoid that, we can tell the concat() function to ignore the index and instead use the default integer index:
pd.concat((pd.read_csv(file) for file in stock_files), ignore_index=True)
The previous trick is useful when each file contains rows from your dataset. But what if each file instead contains columns from your dataset?
Here's an example in which the drinks dataset has been split into two CSV files, and each file contains three columns:
pd.read_csv('data/drinks1.csv').head()
pd.read_csv('data/drinks2.csv').head()
Similar to the previous trick, we'll start by using glob():
drink_files = sorted(glob('data/drinks*.csv'))
And this time, we'll tell the concat() function to concatenate along the columns axis:
pd.concat((pd.read_csv(file) for file in drink_files), axis='columns').head()
Now our DataFrame has all six columns.
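Because concat() aligns on the index when combining along the columns axis, it can be worth a quick sanity check that the combined shape is what you expect. Here's a small sketch (it assumes the two files together hold all 193 rows and 6 columns of the drinks dataset):
# the combined DataFrame should have 193 rows and 6 columns if the files line up
combined = pd.concat((pd.read_csv(file) for file in drink_files), axis='columns')
combined.shape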
Let's say that you have some data stored in an Excel spreadsheet or a Google Sheet, and you want to get it into a DataFrame as quickly as possible.
Just select the data and copy it to the clipboard. Then, you can use the read_clipboard() function to read it into a DataFrame:
df = pd.read_clipboard()
df
Just like the read_csv() function, read_clipboard() automatically detects the correct data type for each column:
Column A      int64
Column B    float64
Column C     object
dtype: object
Let's copy one other dataset to the clipboard and read it in the same way. Amazingly, pandas has even identified the first column as the index:
df.index
Index(['Alice', 'Bob', 'Charlie'], dtype='object')
Keep in mind that if you want your work to be reproducible in the future, read_clipboard() is not the recommended approach.
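If you do start from the clipboard, one way to make the work reproducible (a sketch, with a hypothetical file path) is to immediately save the DataFrame to a CSV file and read from that file going forward:
# save the clipboard data to disk so the notebook can be re-run later
df.to_csv('data/saved_from_clipboard.csv')
# read it back, using the first column as the index
df = pd.read_csv('data/saved_from_clipboard.csv', index_col=0)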
Let's say that you want to split a DataFrame into two parts, randomly assigning 75% of the rows to one DataFrame and the other 25% to a second DataFrame.
For example, we have a DataFrame of movie ratings with 979 rows:
len(movies)
979
We can use the sample() method to randomly select 75% of the rows and assign them to the "movies_1" DataFrame:
movies_1 = movies.sample(frac=0.75, random_state=1234)
Then we can use the drop() method to drop all rows that are in "movies_1" and assign the remaining rows to "movies_2":
movies_2 = movies.drop(movies_1.index)
You can see that the total number of rows is correct:
len(movies_1) + len(movies_2)
And you can see from the index that every movie is in either "movies_1":
movies_1.index.sort_values()
Int64Index([  0,   2,   5,   6,   7,   8,   9,  11,  13,  16,
            ...
            966, 967, 969, 971, 972, 974, 975, 976, 977, 978],
           dtype='int64', length=734)
...or "movies_2":
movies_2.index.sort_values()
Int64Index([  1,   3,   4,  10,  12,  14,  15,  18,  26,  30,
            ...
            931, 934, 937, 941, 950, 954, 960, 968, 970, 973],
           dtype='int64', length=245)
Keep in mind that this approach will not work if your index values are not unique.
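One way to guard against that (a sketch, not from the original) is to confirm index uniqueness first, or to reset the index before splitting:
# True means every index value is unique, so the sample/drop split is safe
movies.index.is_unique
# if it were False, resetting to the default integer index would fix it:
# movies = movies.reset_index(drop=True)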
Let's take a look at the movies DataFrame:
movies.head()
One of the columns is genre:
movies.genre.unique()
array(['Crime', 'Action', 'Drama', 'Western', 'Adventure', 'Biography', 'Comedy', 'Animation', 'Mystery', 'Horror', 'Film-Noir', 'Sci-Fi', 'History', 'Thriller', 'Family', 'Fantasy'], dtype=object)
If we wanted to filter the DataFrame to only show movies with the genre Action or Drama or Western, we could use multiple conditions separated by the "or" operator:
movies[(movies.genre == 'Action') | (movies.genre == 'Drama') | (movies.genre == 'Western')].head()
However, you can actually rewrite this code more clearly by using the isin() method and passing it a list of genres:
movies[movies.genre.isin(['Action', 'Drama', 'Western'])].head()
And if you want to reverse this filter, so that you are excluding (rather than including) those three genres, you can put a tilde in front of the condition:
movies[~movies.genre.isin(['Action', 'Drama', 'Western'])].head()
This works because the tilde is Python's bitwise "not" operator, which pandas uses to invert a boolean Series.
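If the tilde is unfamiliar, here's a tiny illustration (a sketch) of how it inverts a boolean Series:
# each True becomes False and vice versa
s = pd.Series([True, False, True])
~s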
Let's say that you needed to filter the movies DataFrame by genre, but only include the 3 largest genres.
We'll start by taking the value_counts() of genre and saving it as a Series called counts:
counts = movies.genre.value_counts()
counts
Drama        278
Comedy       156
Action       136
Crime        124
Biography     77
Adventure     75
Animation     62
Horror        29
Mystery       16
Western        9
Sci-Fi         5
Thriller       5
Film-Noir      3
Family         2
Fantasy        1
History        1
Name: genre, dtype: int64
The Series method nlargest() makes it easy to select the 3 largest values in this Series:
counts.nlargest(3)
Drama     278
Comedy    156
Action    136
Name: genre, dtype: int64
And all we actually need from this Series is the index:
counts.nlargest(3).index
Index(['Drama', 'Comedy', 'Action'], dtype='object')
Finally, we can pass the index object to isin(), and it will be treated like a list of genres:
movies[movies.genre.isin(counts.nlargest(3).index)].head()
Thus, only Drama, Comedy, and Action movies remain in the DataFrame.
Let's look at a dataset of UFO sightings:
ufo.head()
You'll notice that some of the values are missing.
To find out how many values are missing in each column, you can use the isna() method and then take the sum():
ufo.isna().sum()
City                  25
Colors Reported    15359
Shape Reported      2644
State                  0
Time                   0
dtype: int64
isna() generated a DataFrame of True and False values, and sum() converted all of the True values to 1 and added them up.
Similarly, you can find out the percentage of values that are missing by taking the mean() of isna():
ufo.isna().mean()
City               0.001371
Colors Reported    0.842004
Shape Reported     0.144948
State              0.000000
Time               0.000000
dtype: float64
If you want to drop the columns that have any missing values, you can use the dropna() method:
ufo.dropna(axis='columns').head()
Or if you want to drop columns in which more than 10% of the values are missing, you can set a threshold for dropna():
ufo.dropna(thresh=len(ufo)*0.9, axis='columns').head()
len(ufo) returns the total number of rows, and then we multiply that by 0.9 to tell pandas to only keep columns in which at least 90% of the values are not missing.
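An equivalent approach (a sketch, not from the original) is to build the set of columns to keep from the missing-value percentages we computed earlier:
# keep only the columns where at most 10% of the values are missing
keep_mask = ufo.isna().mean() <= 0.10
ufo.loc[:, keep_mask].head()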
Let's create another example DataFrame of names and locations:
df = pd.DataFrame({'name':['John Arthur Doe', 'Jane Ann Smith'],
                   'location':['Los Angeles, CA', 'Washington, DC']})
df
What if we wanted to split the "name" column into three separate columns, for first, middle, and last name? We would use the str.split() method and tell it to split on a space character and expand the results into a DataFrame:
df.name.str.split(' ', expand=True)
These three columns can actually be saved to the original DataFrame in a single assignment statement:
df[['first', 'middle', 'last']] = df.name.str.split(' ', expand=True)
df
What if we wanted to split a string, but only keep one of the resulting columns? For example, let's split the location column on "comma space":
df.location.str.split(', ', expand=True)
If we only cared about saving the city name in column 0, we can just select that column and save it to the DataFrame:
df['city'] = df.location.str.split(', ', expand=True)[0]
df
Now let's create one more example DataFrame:
df = pd.DataFrame({'col_one':['a', 'b', 'c'], 'col_two':[[10, 40], [20, 50], [30, 60]]})
df
There are two columns, and the second column contains regular Python lists of integers.
If we wanted to expand the second column into its own DataFrame, we can use the apply() method on that column and pass it the Series constructor:
df_new = df.col_two.apply(pd.Series)
df_new
And by using the concat() function, you can combine the original DataFrame with the new DataFrame:
pd.concat([df, df_new], axis='columns')
Let's look at a DataFrame of orders from the Chipotle restaurant chain:
orders.head(10)
Each order has an order_id and consists of one or more rows. To figure out the total price of an order, you sum the item_price for that order_id. For example, here's the total price of order number 1:
orders[orders.order_id == 1].item_price.sum()
11.56
If you wanted to calculate the total price of every order, you would groupby() order_id and then take the sum of item_price for each group:
orders.groupby('order_id').item_price.sum().head()
order_id
1    11.56
2    16.98
3    12.67
4    21.00
5    13.70
Name: item_price, dtype: float64
However, you're not actually limited to aggregating by a single function such as sum(). To aggregate by multiple functions, you use the agg() method and pass it a list of functions such as sum() and count():
orders.groupby('order_id').item_price.agg(['sum', 'count']).head()
That gives us the total price of each order as well as the number of items in each order.
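If you want different aggregations for different columns, agg() also accepts a dictionary. Here's a sketch (it assumes the Chipotle data also has a 'quantity' column, which this dataset normally does):
# sum the price and the quantity for each order in one pass
orders.groupby('order_id').agg({'item_price':'sum', 'quantity':'sum'}).head()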
Let's take another look at the orders DataFrame. What if we wanted to create a new column listing the total price of each order? Recall that we calculated the total price of each order_id above using the sum() method.
sum() is an aggregation function, which means that it returns a reduced version of the input data.
In other words, the output of the sum() function:
len(orders.groupby('order_id').item_price.sum())
1834
...is smaller than the input to the function:
len(orders.item_price)
4622
The solution is to use the transform() method, which performs the same calculation but returns output data that is the same shape as the input data:
total_price = orders.groupby('order_id').item_price.transform('sum')
len(total_price)
We'll store the results in a new DataFrame column called total_price:
orders['total_price'] = total_price
orders.head(10)
As you can see, the total price of each order is now listed on every single line.
That makes it easy to calculate the percentage of the total order price that each line represents:
orders['percent_of_total'] = orders.item_price / orders.total_price
orders.head(10)
Let's take a look at another dataset:
titanic.head()
This is the famous Titanic dataset, which shows information about passengers on the Titanic and whether or not they survived.
If you wanted a numerical summary of the dataset, you would use the describe() method:
titanic.describe()
However, the resulting DataFrame might be displaying more information than you need.
If you wanted to filter it to only show the "five-number summary", you can use the loc accessor and pass it a slice of the "min" through the "max" row labels:
titanic.describe().loc['min':'max']
And if you're not interested in all of the columns, you can also pass it a slice of column labels:
titanic.describe().loc['min':'max', 'Pclass':'Parch']
The Titanic dataset has a "Survived" column made up of ones and zeros, so you can calculate the overall survival rate by taking a mean of that column:
titanic.Survived.mean()
0.3838383838383838
If you wanted to calculate the survival rate by a single category such as "Sex", you would use a groupby():
titanic.groupby('Sex').Survived.mean()
Sex
female    0.742038
male      0.188908
Name: Survived, dtype: float64
And if you wanted to calculate the survival rate across two different categories at once, you would groupby() both of those categories:
titanic.groupby(['Sex', 'Pclass']).Survived.mean()
Sex     Pclass
female  1         0.968085
        2         0.921053
        3         0.500000
male    1         0.368852
        2         0.157407
        3         0.135447
Name: Survived, dtype: float64
This shows the survival rate for every combination of Sex and Passenger Class. It's stored as a MultiIndexed Series, meaning that it has multiple index levels to the left of the actual data.
It can be hard to read and interact with data in this format, so it's often more convenient to reshape a MultiIndexed Series into a DataFrame by using the unstack() method:
titanic.groupby(['Sex', 'Pclass']).Survived.mean().unstack()
This DataFrame contains the same exact data as the MultiIndexed Series, except that now you can interact with it using familiar DataFrame methods.
If you often create DataFrames like the one above, you might find it more convenient to use the pivot_table() method instead:
titanic.pivot_table(index='Sex', columns='Pclass', values='Survived', aggfunc='mean')
With a pivot table, you directly specify the index, the columns, the values, and the aggregation function.
An added benefit of a pivot table is that you can easily add row and column totals by setting margins=True:
titanic.pivot_table(index='Sex', columns='Pclass', values='Survived', aggfunc='mean', margins=True)
This shows the overall survival rate as well as the survival rate by Sex and Passenger Class.
Finally, you can create a cross-tabulation just by changing the aggregation function from "mean" to "count":
titanic.pivot_table(index='Sex', columns='Pclass', values='Survived', aggfunc='count', margins=True)
This shows the number of records that appear in each combination of categories.
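As a side note, the crosstab() function produces the same table of counts directly; this sketch should match the pivot table above as long as the Survived column has no missing values:
# count the rows in each Sex/Pclass combination, with row and column totals
pd.crosstab(titanic.Sex, titanic.Pclass, margins=True)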
Let's take a look at the Age column from the Titanic dataset:
titanic.Age.head(10)
0    22.0
1    38.0
2    26.0
3    35.0
4    35.0
5     NaN
6    54.0
7     2.0
8    27.0
9    14.0
Name: Age, dtype: float64
It's currently continuous data, but what if you wanted to convert it into categorical data?
One solution would be to label the age ranges, such as "child", "young adult", and "adult". The best way to do this is by using the cut() function:
pd.cut(titanic.Age, bins=[0, 18, 25, 99], labels=['child', 'young adult', 'adult']).head(10)
0    young adult
1          adult
2          adult
3          adult
4          adult
5            NaN
6          adult
7          child
8          adult
9          child
Name: Age, dtype: category
Categories (3, object): [child < young adult < adult]
This assigned each value to a bin with a label. Ages 0 to 18 were assigned the label "child", ages 18 to 25 were assigned the label "young adult", and ages 25 to 99 were assigned the label "adult".
Notice that the data type is now "category", and the categories are automatically ordered.
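A natural follow-up (a sketch, using a hypothetical column name 'age_group') is to store the binned ages in a new column and use it for group-wise calculations:
# assign each passenger to an age bin, then compare survival rates across bins
titanic['age_group'] = pd.cut(titanic.Age, bins=[0, 18, 25, 99],
                              labels=['child', 'young adult', 'adult'])
titanic.groupby('age_group').Survived.mean()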
Let's take another look at the Titanic dataset. Notice that the Age column has 1 decimal place and the Fare column has 4 decimal places. What if you wanted to standardize the display to use 2 decimal places?
You can use the set_option() function:
pd.set_option('display.float_format', '{:.2f}'.format)
The first argument is the name of the option, and the second argument is a Python format string.
If you display the DataFrame again, you'll see that Age and Fare now use 2 decimal places. Note that this did not change the underlying data, only the display of the data.
You can also reset any option back to its default:
pd.reset_option('display.float_format')
There are many more options you can specify in a similar way.
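For example, here are a couple of other display options set the same way (a sketch); describe_option() lists the available options if you want to explore further:
# limit how many rows are printed when a DataFrame is displayed
pd.set_option('display.max_rows', 10)
# show the documentation for all options whose names contain 'display'
pd.describe_option('display')
# restore the default
pd.reset_option('display.max_rows')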
The previous trick is useful if you want to change the display of your entire notebook. However, a more flexible and powerful approach is to define the style of a particular DataFrame.
Let's return to the stocks DataFrame:
stocks
We can create a dictionary of format strings that specifies how each column should be formatted:
format_dict = {'Date':'{:%m/%d/%y}', 'Close':'${:.2f}', 'Volume':'{:,}'}
And then we can pass it to the DataFrame's style.format() method:
stocks.style.format(format_dict)
Notice that the Date is now in month-day-year format, the closing price has a dollar sign, and the Volume has commas.
We can apply more styling by chaining additional methods:
(stocks.style.format(format_dict)
 .hide_index()
 .highlight_min('Close', color='red')
 .highlight_max('Close', color='lightgreen')
)
We've now hidden the index, highlighted the minimum Close value in red, and highlighted the maximum Close value in green.
Here's another example of DataFrame styling:
(stocks.style.format(format_dict)
 .hide_index()
 .background_gradient(subset='Volume', cmap='Blues')
)
The Volume column now has a background gradient to help you easily identify high and low values.
And here's one final example:
(stocks.style.format(format_dict)
 .hide_index()
 .bar('Volume', color='lightblue', align='zero')
 .set_caption('Stock Prices from October 2016')
)
There's now a bar chart within the Volume column and a caption above the DataFrame.
Note that there are many more options for how you can style your DataFrame.
Let's say that you've got a new dataset, and you want to quickly explore it without too much work. There's a separate package called pandas-profiling that is designed for this purpose.
First you have to install it using conda or pip. Once that's done, you import pandas_profiling:
import pandas_profiling
Then, simply run the ProfileReport() function and pass it any DataFrame. It returns an interactive HTML report:
pandas_profiling.ProfileReport(titanic)
[The interactive report output is not reproduced here. It includes sections for dataset info, variable types, and warnings, plus per-column summaries (quantile statistics, descriptive statistics, and minimum/maximum or most frequent values) for each column: Age, Fare, Parch, PassengerId, Pclass, and SibSp as numeric; Cabin, Embarked, Sex, and Ticket as categorical; Name as categorical and unique; and Survived as boolean.]