

Distinguish Failed Experiments

Modeling runs can fail for a number of reasons. When logging with rubicon_ml, a failed run may result in an empty or incomplete experiment. In this example, we’ll walk through how to handle such experiments.

First, let's simulate the problem. To do this, we'll create an estimator whose fit() fails 30% of the time. We'll consider any pipeline that ends up with a learned attribute self.state_ to have "succeeded," and any that does not to have "failed."

[1]:
import random

from sklearn.base import BaseEstimator
from sklearn.neighbors import KNeighborsClassifier

class BadEstimator(BaseEstimator):
    """A KNN wrapper whose ``fit`` only sets ``state_`` ~70% of the time."""

    def __init__(self):
        super().__init__()
        self.knn = KNeighborsClassifier(n_neighbors=2)

    def fit(self, X, y):
        self.knn.fit(X, y)

        # simulate a sporadic failure: ~30% of fits never set ``state_``
        output = random.random()
        if output > 0.3:
            self.state_ = output

        return self

    def score(self, X, y):
        return self.knn.score(X, y)
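
As a quick illustration, we can fit BadEstimator directly and check for state_ ourselves. This standalone snippet is just a sketch of the behavior; the attribute only appears when the simulated fit "succeeded":

est = BadEstimator()
est.fit([[1], [1]], [1, 1])

# ``state_`` is only set on the ~70% of fits that "succeed"
hasattr(est, "state_")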

Next, let's create a rubicon_ml project to log our experimentation to.

[2]:
from rubicon_ml import Rubicon
from rubicon_ml.sklearn import make_pipeline
from sklearn.impute import SimpleImputer

random.seed(17)

rubicon = Rubicon(persistence="memory")
project = rubicon.get_or_create_project(name="Failed Experiments")
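
We use in-memory persistence here since this example is throwaway; to keep logs across sessions, rubicon_ml can persist to the local filesystem instead. A minimal sketch, assuming a ./rubicon-root directory for storage:

# filesystem persistence writes experiments to ``root_dir``
rubicon_fs = Rubicon(persistence="filesystem", root_dir="./rubicon-root")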

Now let's create a rubicon_ml.sklearn pipeline with this sporadically failing estimator and attempt to fit it twenty times. We'll tag each experiment whose estimator lacks a state_ attribute with exp.add_tags(["failed"]) and each experiment that has one with exp.add_tags(["passed"]).

[3]:
X = [[1], [1], [1], [1]]
y = [1, 1, 1, 1]

for _ in range(20):
    pipe = make_pipeline(project, SimpleImputer(strategy="mean"), BadEstimator())
    pipe.fit(X, y)

    # a fit that never set ``state_`` counts as a failure
    if not hasattr(pipe["badestimator"], "state_"):
        pipe.experiment.add_tags(["failed"])
    else:
        pipe.experiment.add_tags(["passed"])

Finally, we can retrieve all of our failed experiments by passing tags=["failed"] to project.experiments().

[4]:
for exp in project.experiments(tags=["failed"]):
    print(exp)
Experiment(project_name='Failed Experiments', id='375a55ec-9e50-4c59-86c8-e06be471d45e', name='RubiconPipeline experiment', description=None, model_name=None, branch_name=None, commit_hash=None, training_metadata=None, tags=['failed'], created_at=datetime.datetime(2022, 5, 10, 14, 50, 44, 669757))
Experiment(project_name='Failed Experiments', id='fce82fb6-58d8-42df-a40b-304bc83826b5', name='RubiconPipeline experiment', description=None, model_name=None, branch_name=None, commit_hash=None, training_metadata=None, tags=['failed'], created_at=datetime.datetime(2022, 5, 10, 14, 50, 44, 676902))
Experiment(project_name='Failed Experiments', id='912b9efe-db1f-4ff2-b7c3-51d23bc60acf', name='RubiconPipeline experiment', description=None, model_name=None, branch_name=None, commit_hash=None, training_metadata=None, tags=['failed'], created_at=datetime.datetime(2022, 5, 10, 14, 50, 44, 678565))
Experiment(project_name='Failed Experiments', id='75f4d429-b67e-4d16-a634-700b600224fc', name='RubiconPipeline experiment', description=None, model_name=None, branch_name=None, commit_hash=None, training_metadata=None, tags=['failed'], created_at=datetime.datetime(2022, 5, 10, 14, 50, 44, 683442))
Experiment(project_name='Failed Experiments', id='e0ca4d92-5c37-4118-bfc8-96253fe390c9', name='RubiconPipeline experiment', description=None, model_name=None, branch_name=None, commit_hash=None, training_metadata=None, tags=['failed'], created_at=datetime.datetime(2022, 5, 10, 14, 50, 44, 697318))
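
Each returned Experiment carries its metadata, so we can act on these failed runs later. For example, we can collect their IDs to exclude them from downstream analysis:

failed_ids = [exp.id for exp in project.experiments(tags=["failed"])]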

We can also see that the pipeline passed ~70% of the time and failed ~30% of the time.

[5]:
len(project.experiments(tags=["failed"]))
[5]:
5
[6]:
len(project.experiments(tags=["passed"]))
[6]:
15
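
That's 5 failures out of 20 runs with this seed, an observed failure rate of 25%, in line with the 30% failure chance built into BadEstimator. We can compute it directly:

n_failed = len(project.experiments(tags=["failed"]))
n_passed = len(project.experiments(tags=["passed"]))

# observed failure rate: 5 / 20 == 0.25
n_failed / (n_failed + n_passed)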