View this notebook on GitHub

Structured Profilers

Data profiling is the process of examining a dataset and collecting statistical or informational summaries about that dataset.

The Profiler class inside the DataProfiler is designed to generate data profiles; it ingests either a Data class or a Pandas DataFrame.

Currently, the Data class supports loading the following file formats:

  • Any delimited (CSV, TSV, etc.)

  • JSON object

  • Avro

  • Parquet

  • Text files

  • Pandas Series/Dataframe

Once the data is loaded, the Profiler can calculate statistics and predict the entities (via the Labeler) of every column (CSV) or key-value store (JSON), as well as dataset-wide information such as the number of nulls, duplicates, etc.
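
A minimal sketch of this flow (the file name here is hypothetical):

import dataprofiler as dp

data = dp.Data("my_dataset.csv")   # file type and format are inferred automatically
profile = dp.Profiler(data)        # calculate statistics and predict entities
report = profile.report()          # retrieve the results as a dictionary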

This example focuses specifically on the structured data types and structured profiling.

Reporting

One of the primary purposes of the Profiler is to quickly identify what is in the dataset. This can be useful for analyzing a dataset prior to use or for determining which columns could be useful for a given purpose.

In terms of reporting, there are multiple reporting options:

  • Pretty: Floats are rounded to four decimal places, and lists are shortened.

  • Compact: Similar to pretty, but removes detailed statistics such as runtimes, label probabilities, index locations of null types, etc.

  • Serializable: Output is JSON serializable and not prettified.

  • Flat: Nested output is returned as a flattened dictionary.

The Pretty and Compact reports are the two most commonly used and include global_stats and data_stats for the given dataset. global_stats contains overall properties of the data, such as the number of rows/columns, null ratio, and duplicate ratio. data_stats contains specific properties and statistics for each column, such as min, max, mean, variance, etc.
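
Each format is selected via the report_options argument; a quick sketch, assuming profile is an existing Profiler instance:

report_pretty = profile.report(report_options={"output_format": "pretty"})
report_compact = profile.report(report_options={"output_format": "compact"})
report_serializable = profile.report(report_options={"output_format": "serializable"})
report_flat = profile.report(report_options={"output_format": "flat"})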

For structured profiles, the report looks like this:

"global_stats": {
    "samples_used": int,
    "column_count": int,
    "row_count": int,
    "row_has_null_ratio": float,
    "row_is_null_ratio": float,
    "unique_row_ratio": float,
    "duplicate_row_count": int,
    "file_type": string,
    "encoding": string,
},
"data_stats": [
    {
        "column_name": string,
        "data_type": string,
        "data_label": string,
        "categorical": bool,
        "order": string,
        "samples": list(str),
        "statistics": {
            "sample_size": int,
            "null_count": int,
            "null_types": list(string),
            "null_types_index": {
                string: list(int)
            },
            "data_type_representation": [string, list(string)],
            "min": [null, float],
            "max": [null, float],
            "mean": float,
            "variance": float,
            "stddev": float,
            "histogram": {
                "bin_counts": list(int),
                "bin_edges": list(float),
            },
            "quantiles": {
                int: float
            },
            "vocab": list(char),
            "avg_predictions": dict(float),
            "data_label_representation": dict(float),
            "categories": list(str),
            "unique_count": int,
            "unique_ratio": float,
            "precision": {
                'min': int,
                'max': int,
                'mean': float,
                'var': float,
                'std': float,
                'sample_size': int,
                'margin_of_error': float,
                'confidence_level': float
            },
            "times": dict(float),
            "format": string
        }
    }
]
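
A brief sketch of navigating this structure, assuming report was produced by profile.report() as above:

# Walk the report structure shown above; non-numeric columns may lack "mean"
print(report["global_stats"]["row_count"], report["global_stats"]["column_count"])
for column in report["data_stats"]:
    stats = column["statistics"]
    print(column["column_name"], column["data_type"], stats.get("mean"))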

In the example below, the compact report format is used to shorten the full list of results.

[ ]:
import os
import sys
import json

try:
    sys.path.insert(0, '..')
    import dataprofiler as dp
except ImportError:
    import dataprofiler as dp

data_path = "../dataprofiler/tests/data"

# remove extra TensorFlow logging
import tensorflow as tf
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)
[ ]:
data = dp.Data(os.path.join(data_path, "csv/aws_honeypot_marx_geo.csv"))
profile = dp.Profiler(data)

# Compact - A high level view, good for quick reviews
report  = profile.report(report_options={"output_format":"compact"})
print(json.dumps(report, indent=4))

It should be noted that, in addition to reading input data from multiple file types, the DataProfiler also accepts the input data as a DataFrame. To get more detailed results, such as entity-level predictions from the DataLabeler component or histogram results, the pretty format should be used.

[ ]:
# run data profiler and get the report
import pandas as pd
my_dataframe = pd.DataFrame([[1, 2.0],[1, 2.2],[-1, 3]], columns=["col_int", "col_float"])
profile = dp.Profiler(my_dataframe)

report  = profile.report(report_options={"output_format":"pretty"})
print(json.dumps(report, indent=4))

Profiler Type

The profiler will infer what type of statistics to generate (structured or unstructured) based on the input. However, you can explicitly specify the profile type as well. Here is an example of explicitly calling the structured profiler.

[ ]:
data = dp.Data(os.path.join(data_path, "csv/aws_honeypot_marx_geo.csv"))
profile = dp.Profiler(data, profiler_type='structured')

# print the report using json to prettify.
report = profile.report(report_options={"output_format": "pretty"})
print(json.dumps(report, indent=4))

Profiler options

The DataProfiler has the ability to turn on and off components as needed. This is accomplished via the ProfilerOptions class.

For example, if a user doesn’t require histogram information, they may wish to turn off the histogram functionality. Similarly, if a user is looking for more accurate labeling, they can increase the number of samples used to label.

Below, let’s remove the histogram and increase the number of samples used by the labeler component (1,000 samples).

A full list of options is available in the Profiler section of the DataProfiler documentation.

[ ]:
data = dp.Data(os.path.join(data_path, "csv/diamonds.csv"))

profile_options = dp.ProfilerOptions()

# Setting multiple options via set
profile_options.set({ "histogram.is_enabled": False, "int.is_enabled": False})

# Set options via directly setting them
profile_options.structured_options.data_labeler.max_sample_size = 1000

profile = dp.Profiler(data, options=profile_options)
report  = profile.report(report_options={"output_format":"compact"})

# Print the report
print(json.dumps(report, indent=4))

Updating Profiles

Beyond just profiling, one of the unique aspects of the DataProfiler is the ability to update profiles. To update correctly, the schemas (columns / keys) of the profiles must match.

[ ]:
# Load and profile a CSV file
data = dp.Data(os.path.join(data_path, "csv/sparse-first-and-last-column-header-and-author.txt"))
profile = dp.Profiler(data)

# Update the profile with new data:
new_data = dp.Data(os.path.join(data_path, "csv/sparse-first-and-last-column-skip-header.txt"))
# new_data = dp.Data(os.path.join(data_path, "iris-utf-16.csv")) # will error due to schema mismatch
profile.update_profile(new_data)

# Take a peek at the data
print(data.data)
print(new_data.data)

# Report the compact version of the profile
report  = profile.report(report_options={"output_format":"compact"})
print(json.dumps(report, indent=4))

Merging Profiles

Merging profiles is an alternative method for updating profiles. In particular, multiple profiles can be generated separately, then added together with a simple + command: profile3 = profile1 + profile2

[ ]:
# Load a CSV file with a schema
data1 = dp.Data(os.path.join(data_path, "csv/sparse-first-and-last-column-header-and-author.txt"))
profile1 = dp.Profiler(data1)

# Load another CSV file with the same schema
data2 = dp.Data(os.path.join(data_path, "csv/sparse-first-and-last-column-skip-header.txt"))
profile2 = dp.Profiler(data2)

# Merge the profiles
profile3 = profile1 + profile2

# Report the compact version of the profile
report  = profile3.report(report_options={"output_format":"compact"})
print(json.dumps(report, indent=4))

As you can see, the update_profile function and the + operator behave similarly. The + operator is particularly important because profiles can be saved and loaded, which we cover next.

Differences in Data

The diff functionality can be applied to both structured and unstructured datasets.

Such reports can provide details on the differences between training and validation data, as in this pseudo example:

profiler_training = dp.Profiler(training_data)
profiler_testing = dp.Profiler(testing_data)

validation_report = profiler_training.diff(profiler_testing)
[ ]:
from pprint import pprint

# structured differences example
data_split_differences = profile1.diff(profile2)
pprint(data_split_differences)

Graphing a Profile

We’ve also added the ability to generate visual reports from a profile.

The following plots are currently available to work directly with your profilers:

  • missing values matrix

  • histogram (numeric columns only)

[ ]:
import matplotlib.pyplot as plt


# get the data
data_folder = "../dataprofiler/tests/data"
data = dp.Data(os.path.join(data_folder, "csv/aws_honeypot_marx_geo.csv"))

# profile the data
profile = dp.Profiler(data)
[ ]:
# generate a missing values matrix
fig = plt.figure(figsize=(8, 6), dpi=100)
fig = dp.graphs.plot_missing_values_matrix(profile, ax=fig.gca(), title="Missing Values Matrix")
[ ]:
# generate histogram of all int/float columns
fig = dp.graphs.plot_histograms(profile)
fig.set_size_inches(8, 6)
fig.set_dpi(100)

Saving and Loading a Profile

Not only can the Profiler create and update profiles, it’s also possible to save, load, and then manipulate profiles.

[ ]:
# Load data
data = dp.Data(os.path.join(data_path, "csv/names-col.txt"))

# Generate a profile
profile = dp.Profiler(data)

# Save a profile to disk for later (saves as pickle file)
profile.save(filepath="my_profile.pkl")

# Load a profile from disk
loaded_profile = dp.Profiler.load("my_profile.pkl")

# Report the compact version of the profile
report = profile.report(report_options={"output_format":"compact"})
print(json.dumps(report, indent=4))

With the ability to save and load profiles, profiles can be generated on multiple machines and then merged. Further, profiles can be stored and later used in applications such as change point detection, synthetic data generation, and more.

[ ]:
# Load multiple files via the Data class
filenames = ["csv/sparse-first-and-last-column-header-and-author.txt",
             "csv/sparse-first-and-last-column-skip-header.txt"]
data_objects = []
for filename in filenames:
    data_objects.append(dp.Data(os.path.join(data_path, filename)))


# Generate and save profiles
for i in range(len(data_objects)):
    profile = dp.Profiler(data_objects[i])
    profile.save(filepath="data-"+str(i)+".pkl")


# Load profiles and add them together
profile = None
for i in range(len(data_objects)):
    if profile is None:
        profile = dp.Profiler.load("data-"+str(i)+".pkl")
    else:
        profile += dp.Profiler.load("data-"+str(i)+".pkl")


# Report the compact version of the profile
report = profile.report(report_options={"output_format":"compact"})
print(json.dumps(report, indent=4))