Graph Pipeline Demo¶
DataProfiler can also load and profile graph datasets. As with the rest of the DataProfiler profilers, this is split into two components:
GraphData
GraphProfiler
We will demo the use of this graph pipeline.
First, let’s import the libraries needed for this example.
[ ]:
import os
import sys
import pprint

try:
    sys.path.insert(0, '..')
    import dataprofiler as dp
except ImportError:
    import dataprofiler as dp

data_path = "../dataprofiler/tests/data"
We now input our dataset into the generic DataProfiler pipeline:
[ ]:
data = dp.Data(os.path.join(data_path, "csv/graph_data_csv_identify.csv"))
profile = dp.Profiler(data)
report = profile.report()
pp = pprint.PrettyPrinter(sort_dicts=False, compact=True)
pp.pprint(report)
We notice that the Data class automatically detected the input file as graph data. The GraphData class is able to differentiate between tabular and graph CSV data. After Data matches the input file as graph data, GraphData does the necessary work to load the CSV data into a NetworkX Graph. Profiler runs GraphProfiler when graph data is input (or when data_type="graph" is specified). The report() function outputs the profile for the user.
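If the automatic detection ever needs to be overridden, the graph reader can be requested explicitly. Below is a minimal sketch of that, assuming dp.Data accepts the data_type keyword mentioned above; the file path is the same demo CSV loaded earlier:

# Explicitly request the graph data reader instead of relying on auto-detection
graph_data = dp.Data(
    os.path.join(data_path, "csv/graph_data_csv_identify.csv"),
    data_type="graph",
)

# Profiler dispatches to GraphProfiler because the input is graph data
graph_profile = dp.Profiler(graph_data)
pp.pprint(graph_profile.report())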
Profile¶
The profile skeleton looks like this:
profile = {
    "num_nodes": ...,
    "num_edges": ...,
    "categorical_attributes": ...,
    "continuous_attributes": ...,
    "avg_node_degree": ...,
    "global_max_component_size": ...,
    "continuous_distribution": ...,
    "categorical_distribution": ...,
    "times": ...,
}
Description of properties in profile:
num_nodes: number of nodes in the graph
num_edges: number of edges in the graph
categorical_attributes: list of categorical edge attributes
continuous_attributes: list of continuous edge attributes
avg_node_degree: average degree of nodes in the graph
global_max_component_size: size of the largest global max component in the graph
continuous_distribution: dictionary of statistical properties for each continuous attribute
categorical_distribution: dictionary of statistical properties for each categorical attribute
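For quick checks, individual fields can be pulled out of the report instead of pretty-printing the whole dictionary. A minimal sketch, assuming the report keys listed above:

# Inspect a few top-level fields from the report generated earlier
print("nodes:", report["num_nodes"])
print("edges:", report["num_edges"])
print("avg node degree:", report["avg_node_degree"])
print("categorical attributes:", report["categorical_attributes"])
print("continuous attributes:", report["continuous_attributes"])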
The continuous_distribution and categorical_distribution dictionaries list statistical properties for each edge attribute in the graph:
continuous_distribution = {
    "name": ...,
    "scale": ...,
    "properties": ...,
}

categorical_distribution = {
    "bin_counts": ...,
    "bin_edges": ...,
}
Description of each attribute:
Continuous distribution:
name: name of the distribution
scale: negative log likelihood used to scale distributions and compare them in GraphProfiler
properties: list of distribution properties
Categorical distribution:
bin_counts: histogram bin counts
bin_edges: histogram bin edges
properties lists the following distribution properties: [optional: shape, loc, scale, mean, variance, skew, kurtosis]. The list has either 6 or 7 elements depending on the distribution (the extra element is the shape parameter):
6 elements: norm, uniform, expon, logistic
7 elements: gamma, lognorm
gamma: shape=a (float)
lognorm: shape=s (float)
For more information on the shape parameters a and s, see: https://docs.scipy.org/doc/scipy/tutorial/stats.html#shape-parameters
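Because the properties list is positional, it can help to map it to named statistics based on its length. The sketch below does this under the assumptions that continuous_distribution is keyed by edge attribute name, that each fitted entry carries the name/scale/properties fields shown above, and that attributes without a continuous fit have an empty entry; unpack_properties is a hypothetical helper, not part of DataProfiler:

def unpack_properties(props):
    """Map the positional properties list to named statistics.

    Assumes the ordering described above: an optional shape parameter,
    then loc, scale, mean, variance, skew, kurtosis.
    """
    keys = ["loc", "scale", "mean", "variance", "skew", "kurtosis"]
    if len(props) == 7:  # gamma / lognorm carry an extra shape parameter
        keys = ["shape"] + keys
    return dict(zip(keys, props))

for attribute, dist in report["continuous_distribution"].items():
    if not dist:  # assumption: attributes without a continuous fit are empty
        continue
    print(attribute, dist["name"], unpack_properties(dist["properties"]))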
Saving and Loading a Profile¶
Below you will see an example of how a Graph Profile can be saved and loaded again.
[ ]:
# The default save filepath is profile-<datetime>.pkl
profile.save(filepath="profile.pkl")
new_profile = dp.GraphProfiler.load("profile.pkl")
new_report = new_profile.report()
[ ]:
pp.pprint(new_report)
Difference in Data¶
If we wanted to ensure that this newly loaded profile is the same as the profile we saved, we could compare them using the diff functionality.
[ ]:
diff = profile.diff(new_profile)
[ ]:
pp.pprint(diff)
Another use for diff might be to provide the differences between training and testing profiles, as shown in the cell below. We will use the profile above as the training profile and create a new profile to represent the testing profile.
[ ]:
training_profile = profile
testing_data = dp.Data(os.path.join(data_path, "csv/graph-differentiator-input-positive.csv"))
testing_profile = dp.Profiler(testing_data)
test_train_diff = training_profile.diff(testing_profile)
Below you can observe the difference between the two profiles.
[ ]:
pp.pprint(test_train_diff)
Conclusion¶
We have shown the graph pipeline in DataProfiler. It works much like the rest of the DataProfiler pipeline: load the data with Data, profile it with Profiler, and inspect the results with report().