Introduction to Pandas for Data Science

Wasim Alam
14 min read · Jan 12, 2021


What is Pandas?

If you wonder where the name comes from: unfortunately, it is not because the creators liked pandas as a species so much. It is a combination of "panel data", a term with roots in econometrics, and "Python data analysis".

Data analysis has always been important, especially for scientists, but data collection and analysis play a significant role in business as well. Today we are going to talk about Pandas, a Python library with pre-built methods for many data science applications. Pandas is quite useful for data science operations and, just as importantly, easy to use, which makes it time and effort efficient.


Here are just a few of the things that pandas does well:

  • Easy handling of missing data (represented as NaN) in floating point as well as non-floating point data
  • Size mutability: columns can be inserted and deleted from DataFrame and higher dimensional objects
  • Automatic and explicit data alignment: objects can be explicitly aligned to a set of labels, or the user can simply ignore the labels and let Series, DataFrame, etc. automatically align the data for you in computations
  • Powerful, flexible group by functionality to perform split-apply-combine operations on data sets, for both aggregating and transforming data
  • Make it easy to convert ragged, differently-indexed data in other Python and NumPy data structures into DataFrame objects
  • Intelligent label-based slicing, fancy indexing, and subsetting of large data sets
  • Intuitive merging and joining data sets
  • Flexible reshaping and pivoting of data sets
  • Hierarchical labeling of axes (possible to have multiple labels per tick)
  • Robust IO tools for loading data from flat files (CSV and delimited), Excel files, databases, and saving / loading data from the ultrafast HDF5 format
  • Time series-specific functionality: date range generation and frequency conversion, moving window statistics, date shifting and lagging.

Popularity of Pandas

As we learned, Python is the most popular programming language for data analytics, and many of the popular machine learning and visualization libraries are written in Python, including Pandas, Numpy, TensorFlow, Matplotlib, Scikit-learn, and more. In fact, Python ranked 4th in the 2020 StackOverflow survey of the most popular programming languages, and it is beloved for its simplicity, gentle learning curve, and strong library support.

Pandas is an important part of data analytics. It ranks 4th for most popular and loved libraries. It also consistently ranks highly for most wanted programming tools, a sure sign that Pandas is a sought-after tool for developers around the world. Learning Pandas is an important step to becoming a data analyst.

First Step: Installing Pandas

You can install Pandas using the Python package installer pip by running the following command in your terminal.

$ pip install pandas

Pandas Data Structures and Data Types

A data type is an internal construct that determines how Python will manipulate, use, or store your data. When doing data analysis, it’s important to use the correct data types to avoid errors. Pandas will often correctly infer data types, but sometimes we need to explicitly convert data (see the conversion sketch after the list below). Let’s go over the data types available to us in Pandas, also called dtypes.

  • object: text or mixed numeric or non-numeric values
  • int64: integer numbers
  • bool: true/false values
  • float64: floating point numbers
  • category: finite list of text values
  • datetime64: Date and time values
  • timedelta64[ns]: differences between two datetimes
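As a quick illustration of explicit conversion, the astype() method changes a column’s dtype. A minimal sketch with hypothetical column names:

import pandas as pd

df = pd.DataFrame({'size': ['S', 'M', 'L'], 'price': ['1.5', '2.0', '3.25']})
print(df.dtypes)  # both columns are inferred as object

df['price'] = df['price'].astype('float64')  # explicit conversion to float64
df['size'] = df['size'].astype('category')   # a finite list of text values
print(df.dtypes)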

A data structure is a particular way of organizing our data. Pandas has two data structures, and all operations are based on those two objects:

  • Series
  • DataFrame

Think of this as a chart for easy storage and organization, where Series are the columns and the DataFrame is a table composed of a collection of Series. A Series is best described as a single column of a 2-D array that can store data of any type. A DataFrame is like a table that stores data similar to a spreadsheet, using multiple columns and rows. Each value in a DataFrame object is associated with a row index and a column index.

Series: the most important operations

We can get started with Pandas by creating a Series. We create a Series by invoking the pd.Series() method and passing a list of values, then print it using the print statement. Pandas will, by default, count the index from 0; we will explicitly define index labels later.

import pandas as pd

series1 = pd.Series([1, 2, 3, 4])
print(series1)

Let’s look at a more complex example. Run the code below.
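Something along these lines (the values here are just placeholders):

import pandas as pd

srs = pd.Series([11.9, 36.0, 16.6, 21.8, 34.2])

# the values stored in the Series object
print("The Series values are:")
print(srs.values)

# the index values of the Series object
print("The Series indexes are:")
print(srs.index.values)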

How does this work? Well, srs.values returns the values stored in the Series object, and srs.index.values returns the index values.

Assign names to our values

Pandas generates our indexes automatically, but to make them meaningful we can define them ourselves. Each index corresponds to a value in the Series object. Let’s look at an example where we assign a country name to population growth rates.
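A sketch (the growth-rate figures are illustrative):

import pandas as pd

srs = pd.Series([11.9, 36.0, 16.6, 21.8, 34.2],
                index=['China', 'India', 'USA', 'Brazil', 'Pakistan'])

# Set the name of the Series object
srs.name = "Growth Rate"

# Set the name for the indexes
srs.index.name = "Country"

print(srs)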

How does this work? Two attributes of the Series object are used. The attribute srs.name sets the name of our Series object, and srs.index.name sets the name for the indexes. Pretty simple, right?

Select entries from a Series

To select entries from a Series, we select elements based on the index name or index number.
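Reusing the growth-rate Series from above, a sketch:

import pandas as pd

srs = pd.Series([11.9, 36.0, 16.6, 21.8, 34.2],
                index=['China', 'India', 'USA', 'Brazil', 'Pakistan'])

print(srs['USA'])                # select by index name
print(srs[2])                    # select by index number (srs.iloc[2] in newer Pandas)
print(srs[['China', 'Brazil']])  # select multiple entries by index name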

How does that work? Well, elements from the Series are selected in 3 ways.

  • srs['USA'] selects the element based on its index name.
  • srs[2] selects the element based on its index number. Keep in mind that index numbers start from 0.
  • srs[['China', 'Brazil']] selects multiple elements from the Series by passing multiple index names inside the [].

Drop entries from a Series

Dropping an unwanted index is a common operation in Pandas. If the drop(index_name) function is called on a Series object with a given index name, that entry is deleted.
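A sketch with placeholder index names and values:

import pandas as pd

srs = pd.Series([1.5, 2.5, 3.5, 4.5],
                index=['ind0', 'ind1', 'ind2', 'ind3'])
srs = srs.drop('ind2')  # drop the entry by its index name
print(srs)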

Here, the output shows that the ind2 index is dropped. Also, an index can only be dropped by specifying the index name, not the number. So, srs.drop(srs[2]) does not work.

Pretty simple, right? There are many other functions, conditions, and logical operators we can apply to our Series object to make productive use of indexes. Some of those are (see the sketch after this list):

  • The condition srs[srs == 1.0] will return a series object containing indexes with values equal to 1.0.
  • name : str, optional gives a name to the Series
  • copy : bool, default False allows us to copy data we input
  • The notnull() function will return a Boolean Series: False for NaN or null values, and True for the remaining indexes
  • and much more
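A quick sketch of the first condition and the notnull() function:

import numpy as np
import pandas as pd

srs = pd.Series([1.0, 2.0, np.nan, 1.0])
print(srs[srs == 1.0])  # only the entries whose value equals 1.0
print(srs.notnull())    # True where a value is present, False for NaN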

DataFrame: the most important operations

There are several ways to make a DataFrame in Pandas. The easiest way to create one from scratch is to build a df and print it.
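A minimal sketch with placeholder values:

import pandas as pd

df = pd.DataFrame({'col1': [1, 2], 'col2': [3, 4]})
print(df)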

We can also create a dict and pass our dictionary data to the DataFrame constructor. Say we have some data on vegetable sales and want to organize it by type of vegetable and quantity. Our data would look like this:

data = {
    'peppers': [3, 2, 0, 1],
    'carrots': [0, 3, 7, 2]
}

And now we pass it to the constructor using a simple command.

quantity = pd.DataFrame(data)
quantity

How did that work? Well, each item, or value, in our data will correspond with a column in the DataFrame we created, just like a chart. The index for this DataFrame is listed as numbers, but we can specify them further depending on our needs. Say we wanted to know quantity per month. That would be our new index. We do that using the following command.

quantity = pd.DataFrame(data, index=['June', 'July', 'August', 'September'])
quantity

Get info about your data

One of the first commands you run after loading your data is .info(), which provides all the essential information about a dataset.

From that, you can access more information with other operations, like .shape, which outputs a tuple of (rows, columns). This is super useful for telling us the size of our data, especially after we’ve cleaned it. That way, we can know what was removed.

We can also print a dataset’s column names to find typos or formatting inconsistencies, using the .columns attribute. You can then rename columns easily: the .rename() method works like the Search and Replace function of a Word doc.
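A quick sketch of these operations on the quantity DataFrame from above (bell_peppers is just a hypothetical new name):

quantity.info()          # column dtypes, non-null counts, memory usage
print(quantity.shape)    # (4, 2): four rows, two columns
print(quantity.columns)  # the column names
print(quantity.rename(columns={'peppers': 'bell_peppers'}))  # returns a renamed copy; reassign or pass inplace=True to keep it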

Searching and selecting in our DataFrame

We also need to know how to manipulate or access the data in our DataFrame, such as selecting, searching, or deleting data values. You can do this either by column or by row. Let’s see how it’s done. The easiest way to select a column of data is by using brackets [ ], and we can also use brackets to select multiple columns.
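A sketch with the quantity DataFrame:

quantity['peppers']               # a single column, returned as a Series
quantity[['peppers', 'carrots']]  # multiple columns, returned as a DataFrame

We can select rows by their index label, too. Say we only wanted to look at June’s vegetable quantity.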

quantity.loc['June']

Note: loc and iloc are used for locating data.

.iloc locates by numerical index

.loc locates by the index name. This is similar to list slicing in Python.
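For instance, with the quantity DataFrame, both of these return the June row:

quantity.loc['June']   # locate the row by its index name
quantity.iloc[0]       # locate the same row by its numerical position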

Pandas DataFrame object also provides methods to select specific columns. The following example shows how it can be done.

import pandas as pd

df = pd.read_csv('cancer_stats.csv')

print(df.columns)  # print the columns of the DataFrame

print("\nThe First Column")
print(df['Sex'].head())  # fetch the Sex column from the DataFrame
print("\nThe type of this column is: " + str(type(df['Sex'])) + "\n")

print("\nThe Second Column")
print(df['Under 1'].head())  # fetch the Under 1 column from the DataFrame
print("\nThe type of this column is: " + str(type(df['Under 1'])) + "\n")

print("\nThe Last Column")
print(df['40-44'].head())  # fetch the 40-44 column from the DataFrame
print("\nThe type of this column is: " + str(type(df['40-44'])) + "\n")

How does this work? df.columns displays the names of all the columns present. We then access each column by its name: df['column_name'] is used to fetch the first, second, and last columns above, each returned as a Series.

Create a new DataFrame from pre-existing columns

We can also grab multiple columns and create a new DataFrame object from it.

import pandas as pd

df = pd.read_csv('test.csv')

print(df.columns)

print("\nThe original DataFrame:")
print(df.head())

print("\nThe new DataFrame with selected columns is:\n")
new_df = pd.DataFrame(df, columns=['Sex', 'Under 1', '40-44'])
print(new_df.head())

Create a new DataFrame using an API

We first need to understand what information can be accessed from the API. For that, we use the example of the channel Free Code Camp to make the API call and check the information we get.
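A sketch of the call; the endpoint URL here is illustrative, and a real API will typically also require authentication headers or keys:

import json
import requests

url = 'https://api.twitch.tv/kraken/channels/freecodecamp'  # illustrative endpoint
JSONContent = requests.get(url).json()  # fetch the response and parse it as JSON
content = json.dumps(JSONContent, indent=4, sort_keys=True)
print(content)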

To access the API response, we use the call requests.get(url).json(), which not only gets the response from the API for the url but also parses it into JSON. We then dump the data into content using the json.dumps() method so that we can view it in a more presentable form.

If we look closely at the output, we can see that there is a lot of information that we have received. We get the id, links to various other sections, followers, name, language, status, url, views and much more. Now, we can loop through a list of channels, get information for each channel and compile it into a dataset. I will be using a few properties from this list including _id, display_name, status, followers and views.

Create the dataset

Now that we are aware of what to expect from the API response, let’s start with compiling the data together and creating our dataset. For this blog, we’ll consider a list of channels that I collected online.

We will first define our list of channels in an array. Then, for each channel, we use the API to get its information and append it to another array, channels_list, using the append() method, until all the information is collected in one place. The request response is in JSON format, so to access any key-value pair we simply write the key’s name within square brackets after the JSONContent variable. Finally, we use the DataFrame() method provided in the pandas library to convert this array into a DataFrame: a representation of the data in tabular form, expressed in terms of rows and columns, that allows fast manipulation of the data.
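A sketch of that loop; the channel names and the endpoint are illustrative placeholders:

import pandas as pd
import requests

channels = ['freecodecamp', 'ninja', 'shroud']  # a small placeholder list
channels_list = []
for channel in channels:
    JSONContent = requests.get(
        'https://api.twitch.tv/kraken/channels/' + channel).json()  # illustrative URL
    if 'error' not in JSONContent:
        channels_list.append([JSONContent['_id'], JSONContent['display_name'],
                              JSONContent['status'], JSONContent['followers'],
                              JSONContent['views']])

dataset = pd.DataFrame(channels_list)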

The pandas sample() method displays randomly selected rows of the dataframe. In this method, we pass the number of rows we wish to show. Here, let’s display 5 rows.

dataset.sample(5)

On close inspection, we see that the dataset has two minor problems. Let’s address them one by one.

  1. Headings: Presently, the headings are numbers and do not reflect the data each column represents. It might seem less important with this dataset because it has only a few columns, but when you explore datasets with hundreds of columns, this step becomes really important. Here, we set the headings through the columns attribute provided by pandas. In this case, we define the headings explicitly, but in certain cases you can pick up the JSON keys as headings directly.
  2. None/Null/Blank Values: Some of the rows will have missing values. In such cases, we have two options: we can either remove the complete row where any value is blank, or we can put some carefully selected value in the blank spaces. Here, the status column will have None in some cases. We’ll remove these rows using the method dropna(axis=0, how='any', inplace=True), which drops rows with blank values in the dataset itself. Then, we rebuild the numeric index from 0 to the length of the dataset using pd.RangeIndex(len(dataset.index)). Both fixes are sketched below.
Add column headings and update index
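A sketch of both fixes; the heading names are illustrative:

dataset.columns = ['Id', 'Name', 'Status', 'Followers', 'Views']  # illustrative headings
dataset.dropna(axis=0, how='any', inplace=True)    # drop rows with blank values in place
dataset.index = pd.RangeIndex(len(dataset.index))  # rebuild a clean 0..n-1 index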

Export Dataset

Our dataset is now ready and can be exported to an external file using the to_csv() method. We define two parameters: the first is the name of the file, and the second is a boolean indicating whether the first column in the exported file will contain the index. We now have a .csv file with the dataset we created.
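A sketch:

dataset.to_csv('Dataset.csv', index=False)  # index=False omits the index column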


Reindex data in a DataFrame

We can also reindex the data either by the indexes themselves or the columns. Reindexing with reindex() allows us to make changes without messing up the initial setting of the objects.

Note: The rules for reindexing are the same for Series and DataFrame objects.

# importing pandas in our program
import pandas as pd

# Defining a Series object
srs1 = pd.Series([11.9, 36.0, 16.6, 21.8, 34.2],
                 index=['China', 'India', 'USA', 'Brazil', 'Pakistan'])

# Set Series name
srs1.name = "Growth Rate"

# Set index name
srs1.index.name = "Country"

srs2 = srs1.reindex(['China', 'India', 'Malaysia', 'USA', 'Brazil', 'Pakistan', 'England'])
print("The series with new indexes is:\n", srs2)

srs3 = srs1.reindex(['China', 'India', 'Malaysia', 'USA', 'Brazil', 'Pakistan', 'England'],
                    fill_value=0)
print("\nThe series with new indexes is:\n", srs3)

How did that work? Well, reindex() returned new Series objects with the new index labels. Malaysia and England were not in the original index, so their values default to NaN; with fill_value=0, they are filled with 0 instead. The rows of a DataFrame follow the same rules, and the columns keyword must be specifically used to reindex the columns of a DataFrame; newly added columns are assigned NaN values by default.
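A minimal sketch of both row and column reindexing on a DataFrame (the frame here is illustrative):

import numpy as np
import pandas as pd

df = pd.DataFrame(np.arange(9).reshape(3, 3),
                  index=['Row1', 'Row2', 'Row4'],
                  columns=['col1', 'col2', 'col3'])

# Reindex the rows: 'Row3' is new, so its values default to NaN
print(df.reindex(['Row1', 'Row2', 'Row3', 'Row4']))

# Reindex the columns with the columns keyword: 'col4' is new and filled with NaN
print(df.reindex(columns=['col1', 'col2', 'col4']))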

How to read or import Pandas data

It is quite easy to read or import data from other files using the Pandas library. In fact, we can use various sources, such as CSV, JSON, or Excel, to load our data and access it. Let’s take a look at an example.

Reading and importing data from CSV files

We can import data from a CSV file, which is common practice for Pandas users. We simply create or open our CSV file (pasting the data into a plain-text editor such as Notepad works) and save it in the same directory that houses our Python scripts. We then use a bit of code to read the data with the read_csv function built into Pandas.

import pandas as pd
data = pd.read_csv('vegetables.csv')
print(data)

read_csv generates a new integer index by default, so if the first column of the file should serve as the index, we need to say so. We do this by passing the parameter index_col to tell Pandas which column to use as the index.

data = pd.read_csv("data.csv", index_col=0)

Once we’ve used Pandas to sort and clean data, we can then save it back to a file with a simple command. You only have to input the filename and extension. How simple!

df.to_csv('new_vegetables.csv')

Data Wrangling with Pandas

Once we have our data, we can use data wrangling processes to manipulate and prepare it for analysis. The most common data wrangling processes are merging, concatenation, and grouping. Let’s cover the basics of each.

Merging with Pandas

Merging is used when we want to combine data that shares a key column but lives in different DataFrames. To merge DataFrames, we use the merge() function. Say we have df1 and df2.

import pandas as pd

d = {
    'subject_id': ['1', '2', '3', '4', '5'],
    'student_name': ['Mark', 'Khalid', 'Deborah', 'Trevon', 'Raven']
}
df1 = pd.DataFrame(d, columns=['subject_id', 'student_name'])
print(df1)

data = {
    'subject_id': ['4', '5', '6', '7', '8'],
    'student_name': ['Eric', 'Imani', 'Cece', 'Darius', 'Andre']
}
df2 = pd.DataFrame(data, columns=['subject_id', 'student_name'])
print(df2)

So, how do we merge them? It’s simple: with the merge() function! By default, merge() performs an inner join on the shared column, so only the subject_ids present in both DataFrames (4 and 5) appear in the result, with the overlapping student_name columns suffixed _x and _y.

pd.merge(df1, df2, on='subject_id')

Grouping with Pandas

Grouping is how we categorize our data. If a value occurs in multiple rows of a single column, the data related to that value in other columns can be grouped together. Just like with merging, it’s simpler than it sounds. We use the groupby() function. Look at this example.

# import pandas library
import pandas as pd

raw = {
    'Name': ['Darell', 'Darell', 'Lilith', 'Lilith', 'Tran', 'Tran', 'Tran',
             'Tran', 'John', 'Darell', 'Darell', 'Darell'],
    'Position': [2, 1, 1, 4, 2, 4, 3, 1, 3, 2, 4, 3],
    'Year': [2009, 2010, 2009, 2010, 2010, 2010, 2011, 2012, 2011, 2013, 2013, 2012],
    'Marks': [408, 398, 422, 376, 401, 380, 396, 388, 356, 402, 368, 378]
}

df = pd.DataFrame(raw)

group = df.groupby('Year')
print(group.get_group(2011))

Concatenation

Concatenation is a long word that means to add a set of data to another. We use the concat() function to do so. To clarify the difference between merge and concatenation, merge() combines data on shared columns, while concat() combines DataFrames across columns or rows.

print(pd.concat([df1, df2]))

Pretty simple, right? Some other common data wrangling processes that you should know are:

  • Mapping data and finding duplicates
  • Finding outliers in data
  • Data Aggregation
  • Reshaping data
  • Replace & rename
  • and more

Thank you for reading my blog, keep learning!!!!

REFERENCES:

https://en.wikipedia.org/wiki/Pandas_(software)

https://www.tutorialspoint.com/python_data_science/python_data_aggregation.htm

https://www.educative.io/edpresso/what-is-pandas-in-python

http://www.gregreda.com/2013/10/26/intro-to-pandas-data-structures/
