Machine Learning Guide

MLA 008 Exploratory Data Analysis (EDA)

OCDevel

4.9 • 848 Ratings

🗓️ 26 October 2018

⏱️ 25 minutes

Summary

Exploratory data analysis (EDA) sits at the critical pre-modeling stage of the data science pipeline. It focuses on uncovering missing values, detecting outliers, and understanding feature distributions through statistical summaries, such as Pandas' info() and describe(), and through visualizations like histograms and box plots. Visualization tools like Matplotlib, together with processes such as imputation and feature-correlation analysis, help practitioners decide how best to prepare, clean, or transform data before it enters a machine learning model.

EDA in the Data Science Pipeline

  • Position in Pipeline: EDA is an essential pre-processing step in the business intelligence (BI) or data science pipeline, occurring after data acquisition but before model training.
  • Purpose: The goal of EDA is to understand the data by identifying:
    • Missing values (nulls)
    • Outliers
    • Feature distributions
    • Relationships or correlations between variables

Data Acquisition and Initial Inspection

  • Data Sources: Data may arrive from various streams (e.g., Twitter, sensors) and is typically stored in structured formats such as databases or spreadsheets.
  • Loading Data: In Python, data is often loaded into a Pandas DataFrame using commands like pd.read_csv('filename.csv').
  • Initial Review:
    • df.info(): Displays data types and counts of non-null entries by column, quickly highlighting missing values.
    • df.describe(): Provides summary statistics for each column, including count, mean, standard deviation, min/max, and quartiles (see the sketch after this list).
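
A minimal sketch of this first pass; the filename and the age column used here and in later sketches are hypothetical examples, not from the episode:

    import pandas as pd

    # Load a CSV into a DataFrame ('titanic.csv' is a stand-in filename)
    df = pd.read_csv('titanic.csv')

    # Data types and non-null counts per column; a non-null count lower
    # than the row total flags missing values in that column
    df.info()

    # Count, mean, std, min/max, and quartiles for each numeric column
    print(df.describe())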

Handling Missing Data and Outliers

  • Imputation:
    • Missing values must often be filled (imputed), as most machine learning algorithms cannot handle nulls.
    • Common strategies: impute with mean, median, or another context-appropriate value.
    • For example, missing ages can be filled with the column's average rather than zero, to avoid introducing skew.
  • Outlier Strategy:
    • Outliers can be removed, replaced (e.g., by nulls and subsequently imputed), or left as-is if legitimate.
    • Treatment depends on whether outliers represent true data points or data errors; both approaches are sketched below.
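
A rough sketch of both steps, continuing with the hypothetical df and its age column from above; the 1.5× IQR rule is one common outlier convention, not the only one:

    # Impute missing ages with the column mean; the median is often
    # safer when the distribution is skewed
    df['age'] = df['age'].fillna(df['age'].mean())

    # Flag points beyond 1.5x the interquartile range as outliers
    q1, q3 = df['age'].quantile([0.25, 0.75])
    iqr = q3 - q1
    is_outlier = (df['age'] < q1 - 1.5 * iqr) | (df['age'] > q3 + 1.5 * iqr)

    # Either drop the outliers...
    df_dropped = df[~is_outlier]

    # ...or replace them with nulls and impute again as above
    df.loc[is_outlier, 'age'] = None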

Visualization Techniques

  • Purpose: Visualizations help reveal data distributions, outliers, and relationships that may not be apparent from raw statistics.
  • Common Visualization Tools:
    • Matplotlib: The primary Python library for static data visualizations.
    • Visualization Methods:
      • Histogram: Ideal for visualizing the distribution of a single variable (e.g., age), making outliers visible as isolated bars.
      • Box Plot: Summarizes the median, quartiles, and spread in one glyph, with 'whiskers' marking the extent of non-outlying data (by convention either the min/max or 1.5× the interquartile range); useful for spotting outliers at a glance.
      • Line Chart: Used for time-series data, highlighting trends and anomalies (e.g., sudden spikes in stock price).
      • Correlation Matrix: Visual grid (often of scatterplots) comparing each feature against every other, helping to detect strong or weak linear relationships between features (all four chart types are sketched below).
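
A sketch of each chart type using Matplotlib and Pandas' built-in plotting, continuing with the hypothetical df from earlier; the price column and date index are assumptions for illustration:

    import matplotlib.pyplot as plt
    import pandas as pd

    # Histogram: distribution of a single variable, with outliers
    # appearing as isolated bars at the edges
    df['age'].plot.hist(bins=30, title='Age distribution')
    plt.show()

    # Box plot: median, quartiles, whiskers, and outlier points
    df['age'].plot.box()
    plt.show()

    # Line chart: a time series such as a stock price
    # (assumes a 'price' column and a date index)
    df['price'].plot.line(title='Price over time')
    plt.show()

    # Scatter matrix: pairwise scatterplots of every numeric feature
    pd.plotting.scatter_matrix(df.select_dtypes('number'), figsize=(8, 8))
    plt.show()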

Feature Correlation and Dimensionality

  • Correlation Plot:
    • Generated with df.corr() in Pandas to assess linear relationships between features (sketched after this list).
    • High correlation between features may suggest redundancy (e.g., number of bedrooms and square footage) and inform feature selection or removal.
  • Limitations:
    • While correlation plots provide intuition, automated approaches like Principal Component Analysis (PCA) or autoencoders are typically superior for feature reduction and target prediction tasks.
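
A short sketch of the correlation step; the heatmap rendering below is one plain-Matplotlib option, not the only way to display it:

    import matplotlib.pyplot as plt

    # Pairwise Pearson correlations between numeric features
    corr = df.corr(numeric_only=True)
    print(corr)

    # Render the matrix as a heatmap
    plt.imshow(corr, cmap='coolwarm', vmin=-1, vmax=1)
    plt.xticks(range(len(corr)), corr.columns, rotation=90)
    plt.yticks(range(len(corr)), corr.columns)
    plt.colorbar()
    plt.show()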

Data Transformation Prior to Modeling

  • Scaling:
    • Machine learning models, especially neural networks, often require input features to be scaled (normalized or standardized).
    • StandardScaler (from scikit-learn): Standardizes features to zero mean and unit variance, but is sensitive to outliers, which distort both statistics.
    • RobustScaler: Scales using statistics that outliers barely affect (the median and interquartile range), limiting their influence and simplifying preprocessing (see the sketch below).
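
A self-contained sketch contrasting the two scalers on a toy column with one extreme outlier; the data is invented purely to show the difference:

    import numpy as np
    from sklearn.preprocessing import StandardScaler, RobustScaler

    # Toy feature matrix: one column, one extreme outlier in the last row
    X = np.array([[1.0], [2.0], [3.0], [100.0]])

    # StandardScaler: (x - mean) / std; the outlier inflates both
    # statistics, squashing the normal points together
    print(StandardScaler().fit_transform(X).ravel())

    # RobustScaler: (x - median) / IQR; statistics the outlier barely
    # moves, so the normal points keep a sensible spread
    print(RobustScaler().fit_transform(X).ravel())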

Summary of EDA Workflow

  • Initial Steps:
    • Load data into a DataFrame.
    • Examine data types and missing values with df.info().
    • Review summary statistics with df.describe().
  • Visualization:
    • Use histograms and box plots to explore feature distributions and detect anomalies.
    • Leverage correlation matrices to identify related features.
  • Data Preparation:
    • Impute missing values thoughtfully (e.g., with means or medians).
    • Decide on treatment for outliers: removal, imputation, or scaling with tools like RobustScaler.
  • Outcome:
    • Proper EDA ensures that data is cleaned, features are well-understood, and inputs are suitable for effective machine learning model training.

Transcript

0:00.0

You're listening to Machine Learning Applied.

0:02.6

This is the second of the visualization episodes.

0:06.4

In this one, we're going to talk about exploratory data analysis, aka EDA, as well as some charting fundamentals.

0:15.2

So exploratory data analysis, I threw that phrase around a lot in the last episode without describing it.

0:22.3

EDA is part of a larger pipeline for your machine learning process or your data science process.

0:30.1

This whole umbrella is what's called business intelligence, BI, or even just data science. It's sort of the A to Z, the beginning to

0:40.9

end pipeline of what you're working on. The whole reason you have a machine learning model in the

0:45.6

first place is part of a pipeline. The first part of this pipeline is going to be getting your data

0:50.7

from some data source, maybe some data stream like Twitter or some sensors,

0:55.0

and then you're going to maybe convert that into something that can be stored on a database.

0:59.0

You might be cleaning up your data.

1:01.0

You would be visualizing your data and determining how it will fit into your machine learning model.

1:06.0

That's what's called exploratory data analysis, EDA.

1:10.0

EDA is looking at your data, figuring out if there's

1:14.2

holes in your data, the way it's distributed, how you're going to have to fix it up, tidy it up,

1:19.4

et cetera, before it hits the machine learning model. Okay, then it hits the machine learning model.

1:24.7

And then you have some results, maybe some information that's going to go to

1:29.0

the business decision makers. So you output results with your machine learning model, and then you

1:34.6

might maybe generate some visualizations or reports on those results and then deliver them to

1:39.9

the business people. This whole pipeline from beginning to end is called business intelligence,

1:44.6

the business intelligence pipeline, BI. You'll see that term a lot. And we'll talk about BI

1:49.3

in a future episode. We'll talk about each of the steps of this pipeline. And sort of all the tasks

...
