
Exploring the Iris Flower Dataset with Python: A Comprehensive Guide to Data Analysis

Updated: Jan 17


Iris Flower Dataset with Python

What is the Iris Flower Dataset?

The Iris Flower Dataset is a well-known and frequently used dataset in the field of machine learning and statistics. The dataset is named after the Iris flowers from which the data was collected.


The Iris flower dataset, a pioneering multivariate dataset introduced by the eminent British statistician and biologist Ronald Fisher in 1936, has been a linchpin in statistical classification techniques and machine learning.


The Iris Flower Dataset is a multivariate dataset that provides measurements for four features of three different species of Iris flowers. The features include:

  1. Sepal Length: The length of the sepals in centimeters.

  2. Sepal Width: The width of the sepals in centimeters.

  3. Petal Length: The length of the petals in centimeters.

  4. Petal Width: The width of the petals in centimeters.


These properties serve as fundamental metrics, providing essential insights into the morphological characteristics of Iris flowers across different species.


These measurements were taken from 150 different Iris flowers, with 50 samples from each of the following three species:

  1. Iris setosa

  2. Iris versicolor

  3. Iris virginica


Iris Flower Dataset with Python

Researchers and practitioners widely use the Iris Flower Dataset to demonstrate and assess the performance of various classification techniques, including but not limited to support vector machines, decision trees, and k-nearest neighbors. Its simplicity, yet richness in information, makes it an excellent starting point for learning and experimenting with different machine learning algorithms.
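
As a quick illustration, here is a minimal classification sketch using scikit-learn's bundled copy of the dataset and a k-nearest neighbors model. This is only a sketch, assuming scikit-learn is installed; the reported accuracy will vary with the random split.

# A minimal k-nearest neighbors sketch on the Iris data (assumes scikit-learn)
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Load the features (X) and integer class labels (y)
X, y = load_iris(return_X_y=True)

# Hold out 30% of the samples for testing
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# Fit a 3-nearest-neighbors classifier and report its test accuracy
model = KNeighborsClassifier(n_neighbors=3)
model.fit(X_train, y_train)
print(f"Test accuracy: {model.score(X_test, y_test):.2f}")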


Iris Flower Dataset with Python

In Python, the Iris Flower Dataset is typically represented as a DataFrame using a library called Pandas. Here's how you can define and work with the Iris Flower Dataset using Python and Pandas:

# Importing necessary libraries 
import pandas as pd 

# Defining the Iris Flower Dataset 
data = { 
	'sepal_length': [5.1, 4.9, 4.7, 4.6, 5.0, ...], # List of sepal lengths 
	'sepal_width': [3.5, 3.0, 3.2, 3.1, 3.6, ...], # List of sepal widths 
	'petal_length': [1.4, 1.4, 1.3, 1.5, 1.4, ...], # List of petal lengths 
	'petal_width': [0.2, 0.2, 0.2, 0.2, 0.2, ...], # List of petal widths 
	'species': ['Iris-setosa', 'Iris-setosa', 'Iris-setosa', 'Iris-setosa', 'Iris-setosa', ...] # List of species 
} 

# Creating a DataFrame from the dictionary 
iris_df = pd.DataFrame(data) 

# Displaying the first few rows of the DataFrame 
print(iris_df.head())

In this example, each column of the DataFrame represents a specific feature of the Iris flowers, and each row represents a different sample. The 'species' column indicates the species of each Iris flower. The data is organized in a tabular format, making it easy to analyze and manipulate using various Python libraries such as Pandas, NumPy, and scikit-learn.
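
In practice, rather than typing the values by hand, the complete 150-row dataset can also be built from scikit-learn's bundled copy. The following is a sketch assuming scikit-learn is installed; note that scikit-learn's column names ('sepal length (cm)', and so on) differ slightly from the snake_case names used above.

# Loading the full Iris dataset into a DataFrame via scikit-learn
import pandas as pd
from sklearn.datasets import load_iris

# load_iris(as_frame=True) returns a Bunch whose .frame attribute is a DataFrame
iris = load_iris(as_frame=True)
iris_df = iris.frame

# Replace the integer target (0, 1, 2) with the corresponding species names
iris_df["species"] = iris_df["target"].map(dict(enumerate(iris.target_names)))
iris_df = iris_df.drop(columns="target")

# Displaying the first few rows of the DataFrame
print(iris_df.head())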


Step-by-Step Exploration of the Iris Flower Dataset with Python


Step 1: Setup in Colab - Importing Libraries and Loading the Dataset:

In this initial step, we leverage the capabilities of Google Colab and import essential Python libraries such as Pandas, Seaborn, and Matplotlib. We then specify the path to the Iris Flower Dataset and load it into a Pandas DataFrame for further analysis.

# Importing necessary libraries in Colab 
import pandas as pd 
import numpy as np
import seaborn as sns 
import matplotlib.pyplot as plt 

# Specifying the path of the dataset 
dataset_path = "path/to/iris_dataset.csv" 

# Reading the dataset into a DataFrame 
df = pd.read_csv(dataset_path)
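
If a local CSV file is not available, one convenient alternative is Seaborn's built-in copy of the dataset, which already uses the sepal_length/sepal_width/petal_length/petal_width/species column names (this assumes an internet connection, since Seaborn downloads its sample datasets on first use).

# Alternative: load the Iris dataset bundled with Seaborn
df = sns.load_dataset("iris")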

Output - Iris Flower Dataset with Python

Step 2: Exploring the Dataset - Understanding Variables:

The Iris dataset contains numerical variables (sepal length, sepal width, petal length, and petal width) and a categorical variable denoting the flower species. Recognizing this distinction is crucial, especially when dealing with classification problems in machine learning.

The df.head() command displays the first few rows of the DataFrame (df). The head() method is commonly used in Pandas to retrieve the initial rows of a DataFrame, providing a quick glimpse of the dataset.

# Displaying the first 5 rows of the dataset 
df.head()

The output will show the top rows of the DataFrame, allowing you to inspect the structure and content of the dataset.
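
As a quick sanity check, you can also confirm the class balance mentioned earlier (50 samples per species). This assumes the species column is named 'species', as in the examples above.

# Counting how many samples belong to each species; each should appear 50 times
print(df["species"].value_counts())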


Output - Iris Flower Dataset with Python 2

 Types of variables present in the dataset:

  • Numerical Variables: Variables like sepal_length and petal_width are numerical, meaning they represent measurable quantities with a meaningful numeric value.

  • Categorical Variable: The variable species, representing the type of flower, is referred to as a categorical variable. Categorical variables represent categories or groups and often have a limited and fixed set of possible values.


Significance:

Understanding the types of variables is crucial in data analysis and machine learning. It guides the choice of appropriate statistical techniques and machine learning algorithms for analysis and modeling. Numerical variables often involve mathematical operations, while categorical variables may require encoding for analysis and modeling purposes.
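
As a small illustration of such encoding (a sketch; the species_code column name is arbitrary), the species labels can be converted to integer codes directly in Pandas:

# Encoding the categorical species column as integer codes (0, 1, 2)
df["species_code"] = df["species"].astype("category").cat.codes

# Showing the original labels next to their codes
print(df[["species", "species_code"]].head())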


Step 3: Descriptive Statistics - Unveiling Key Insights:

Descriptive statistics offer a snapshot of the dataset's characteristics. Utilizing Pandas' describe() method, we reveal statistical measures such as mean, standard deviation, minimum, maximum, and quartiles for each variable.

# Printing descriptive statistics using Pandas' describe method 
df.describe()

The output of df.describe() will include the following statistics for each numerical column in the DataFrame:

  • Count: Number of non-null (non-missing) values.

  • Mean: Mean or average of the values.

  • Std: Standard deviation, a measure of the amount of variation or dispersion.

  • Min: Minimum value in the column.

  • 25%: First quartile, or 25th percentile.

  • 50% (median): Median, or 50th percentile.

  • 75%: Third quartile, or 75th percentile.

  • Max: Maximum value in the column.
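
By default, describe() summarizes only the numerical columns. To include the categorical species column as well (which adds statistics such as unique, top, and freq), you can pass include='all':

# Summarizing every column, including the categorical 'species'
df.describe(include="all")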

Output - Iris Flower Dataset with Python 3

Within the dataset, take the example of the sepal_length feature, which comprises 150 data points. The standard deviation of these values is approximately 0.83, a measure of the amount of variation or dispersion present in the sepal length data.


Quartiles Analysis:

The 25% and 75% ranges are commonly referred to as quartiles. Analyzing these quartiles allows for a deeper understanding of the distribution of the data. It provides insights into the spread and central tendency of the feature, aiding in the identification of potential outliers.
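
As a small sketch of such an outlier check, a common rule of thumb flags values lying more than 1.5 times the interquartile range (IQR) beyond the quartiles:

# IQR-based outlier check for sepal_length (1.5 * IQR rule of thumb)
q1 = df["sepal_length"].quantile(0.25)
q3 = df["sepal_length"].quantile(0.75)
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr

# Rows falling outside the 1.5 * IQR whiskers (none are expected for sepal_length)
outliers = df[(df["sepal_length"] < lower) | (df["sepal_length"] > upper)]
print(outliers)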


Dataset Information Retrieval:

To gain a comprehensive overview of the dataset, the df.info() command is executed. This command provides valuable information, including the number of non-null entries in each column, data types, and memory usage.

# Displaying information about the dataset 
df.info()

This method is used to obtain a concise summary of the DataFrame, including information about the data types, non-null values, and memory usage.


Output:

  • Data Types: Displays the data type of each column.

  • Non-Null Count: Indicates the number of non-null (non-missing) values in each column.

  • Memory Usage: Provides an estimate of the memory usage by the DataFrame.


Usage:

  • Helps quickly identify the data types present in each column.

  • Assists in detecting missing values by comparing non-null counts.

  • Gives an overview of the overall structure and characteristics of the DataFrame.


Output:

<class 'pandas.core.frame.DataFrame'>
RangeIndex: 150 entries, 0 to 149
Data columns (total 5 columns):
sepal_length    150 non-null float64
sepal_width     150 non-null float64
petal_length    150 non-null float64
petal_width     150 non-null float64
species         150 non-null object
dtypes: float64(4), object(1)
memory usage: 6.0+ KB

Based on this information, there are no missing values anywhere in the dataset. Furthermore, the four numerical features are stored as float64 and the species column as a string (object) type, which makes the dataset clean and well suited to a wide range of analytical tasks.


Step 4: Data Integrity Check - Ensuring Completeness:

Ensuring data integrity is paramount. We check for missing data using Pandas' isna() method, ensuring that our dataset is complete and reliable.

# Checking for missing data in the dataset 
df.isna()

The df.isna() method in Pandas is used to check for missing or NaN (Not a Number) values in a DataFrame. It returns a DataFrame of the same shape as the original, where each element is a boolean value indicating whether the corresponding element in the original DataFrame is missing (True) or not (False).


Output - Iris Flower Dataset with Python 5

Output:

The resulting boolean DataFrame has the same shape as df, where each element is True if the corresponding element in df is a missing value and False otherwise.


Application:

  • Helps identify the presence of missing values in the dataset.

  • Useful for subsequent steps like data cleaning, imputation, or understanding the overall data quality (a small illustrative sketch follows this list).

  • Combined with other methods, it facilitates data quality assessment and preprocessing before analysis.
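
Purely as an illustration (the Iris dataset itself has no missing values), a minimal imputation sketch could fill numeric gaps with each column's mean:

# Illustrative only: fill missing numeric values with the column means
df_clean = df.fillna(df.mean(numeric_only=True))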


How to check for missing data in a DataFrame using the Pandas library?


df.isna().any() checks if there is at least one missing value (True) in each column of the DataFrame df.


If there is any missing data in a column, the result for that column will be True. If there is no missing data in a column, the result will be False.


The overall result will be a Series of boolean values, where each value corresponds to whether there is missing data in the respective column.
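
A minimal sketch of this check, optionally followed by a per-column count of missing values:

# True/False per column: does the column contain any missing values?
print(df.isna().any())

# Number of missing values in each column
print(df.isna().sum())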


Output - Iris Flower Dataset with Python 6

Step 5: Data Visualization - Insightful Plots:

To enhance our understanding, we employ data visualization techniques. Seaborn and Matplotlib come into play to create informative plots such as scatter plots, pair plots, and box plots, providing visual insights into the relationships within the dataset.

# Data visualization with Seaborn 
sns.pairplot(df, hue='species') 
plt.show()
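
Beyond the pair plot, a box plot is another of the views mentioned above. For example, petal length broken down by species (using the same column names as in the rest of this post):

# Box plot of petal length for each species
sns.boxplot(data=df, x="species", y="petal_length")
plt.show()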

Conclusion

This detailed analysis demonstrates the power of Python in unraveling the intricacies of the Iris Flower Dataset. By combining data analysis and visualization, we gain a holistic view of the dataset's characteristics and relationships. The Colab notebook containing these steps and visualizations can be accessed through the provided link, facilitating hands-on exploration and further insights.
