DataRobot and Domino Data Lab on the test bench

Are you a code-savvy data scientist or someone with little coding experience? Either way, a machine learning platform can help you bring your ML solution into production. But which one? Let’s check out the two platforms DataRobot and Domino Data Lab.
by Marcel Moldenhauer

Having found out why ML platforms are important and how to assess them in order to choose the platform best suited to your use case, let’s take a closer look at two platforms that make it easy to scale ML models and bring them into production: DataRobot and Domino Data Lab.
While both platforms aim to simplify the process of scaling ML models, their approach and target audience are rather different. DataRobot is built around AutoML and focuses on no-code solutions. In contrast, Domino is made for coders who want full control over their model and do not need many automated features or services. Let us explore in more detail how this plays out across the different components of the two platforms.
Figure 1: DataRobot and Domino Data Lab in direct comparison.
1. Data ingestion and storage
Both platforms can load data from a wide range of sources, including databases and storage services in AWS, Azure and GCP. For GCP, however, only DataRobot offers out-of-the-box connectors. Batch loading is straightforward in both, but neither of the two platforms offers a sophisticated option for streaming input data.
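To make the batch-loading part concrete, here is a generic Python sketch of the two most common patterns; the bucket, credentials and table names are invented placeholders, and both platforms additionally provide their own connectors:

```python
import pandas as pd
from sqlalchemy import create_engine

# Batch load from cloud object storage, e.g. a CSV in S3
# (requires the s3fs package; Azure and GCP have equivalent connectors).
customers = pd.read_csv("s3://example-bucket/raw/customers.csv")

# Batch load from a relational database via SQLAlchemy
# (connection string and table are placeholders).
engine = create_engine("postgresql://user:password@dbhost:5432/analytics")
orders = pd.read_sql("SELECT * FROM orders WHERE order_date >= '2023-01-01'", engine)
```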
DataRobot promotes the use of its separate service Paxata for data wrangling. Paxata offers many useful tools to transform and clean data without any coding. However, since this is a separate product, we will not go into any details here. Considering only the core product of DataRobot without Paxata, the options to transform data are limited. Hence, the quality of the incoming data must already be high.
In Domino, you can perform any data wrangling that a programming language allows: you boot a workspace and code the necessary transformations in common languages like Python or R, or even in more exotic ones like SAS and MATLAB. While this is great for code-savvy data scientists, what is missing is an option to run quick transformations in a graphical interface without creating a workspace. Another feature that Domino lacks is an automated quality check. To detect outliers and missing values, or simply to see the distribution of a variable, you first need to start yet another workspace and do some coding.
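As a minimal sketch of what such hand-coded wrangling and quality checking looks like in a Python workspace (file paths and column names are invented for illustration):

```python
import pandas as pd

df = pd.read_csv("data/input.csv")

# Quality check: missing values per column and distribution of a variable.
print(df.isna().sum())
print(df["revenue"].describe())

# Simple IQR-based outlier flag.
q1, q3 = df["revenue"].quantile([0.25, 0.75])
iqr = q3 - q1
outliers = df[(df["revenue"] < q1 - 1.5 * iqr) | (df["revenue"] > q3 + 1.5 * iqr)]
print(f"{len(outliers)} potential outliers")

# A typical transformation: impute missing values and derive a feature.
df["revenue"] = df["revenue"].fillna(df["revenue"].median())
df["revenue_per_order"] = df["revenue"] / df["orders"].clip(lower=1)
df.to_csv("data/cleaned.csv", index=False)
```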
2. Experimentation zone
For experimentation, both platforms come with a well-developed and mature but fundamentally different offering. DataRobot’s experimentation zone is built around a graphical interface in which you can run a selection of different models in a few clicks. Feature engineering happens automatically as part of the general model training, without exposing further details to the user. DataRobot relies heavily on AutoML to suggest the most suitable model, and it presents the performance on a leaderboard, based on a selected metric. From there, you can adjust the models, tune hyperparameters and compare the performance against the original model. The user interface also comes with handy ways to visualize the models and generate insights from them. This is particularly helpful when working on explainable AI topics, where you don’t just want to know what the outcome is, but also why the ML model predicts it that way.
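Although DataRobot’s experimentation zone is GUI-first, the same Autopilot workflow can also be scripted via its official Python client. A minimal sketch, with endpoint, token, dataset and target column as placeholders:

```python
import datarobot as dr

# Connect with your own endpoint and API token (placeholders here).
dr.Client(endpoint="https://app.datarobot.com/api/v2", token="YOUR_API_TOKEN")

# Upload a dataset and start Autopilot on a target column.
project = dr.Project.create(sourcedata="data/cleaned.csv", project_name="churn-demo")
project.set_target(target="churned", mode=dr.AUTOPILOT_MODE.QUICK)
project.wait_for_autopilot()

# Inspect the leaderboard, sorted by the project's optimization metric.
for model in project.get_models()[:5]:
    print(model.model_type, model.metrics[project.metric]["validation"])
```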
In contrast, experiments in Domino are run in a coding environment. You can start a workspace of your choice, like Jupyter Notebook, RStudio or VS Code, and create environments in a programming language you are familiar with. You can save and export the results of the model runs as well as the models themselves, just as you would when working on your local machine. The main advantage over a local machine is that your workspace comes as a Docker container running on a provided cluster, with flexible and scalable compute power. Additionally, Docker containers and environments can easily be shared, and others can collaborate on your experiments without having to worry about installing the right packages and dependencies. Domino also allows you to execute code as part of a so-called experiment. This lets you choose the infrastructure, supply hyperparameters and persist all tracked artifacts while running the code. Furthermore, it helps to organize many different experiment runs in a single interface.
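A minimal sketch of what such a tracked run can look like, written here against the open-source MLflow tracking API (recent Domino versions expose MLflow-compatible experiment tracking; the model, data and parameters below are invented for illustration):

```python
import mlflow
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run(run_name="rf-baseline"):
    params = {"n_estimators": 200, "max_depth": 8}
    mlflow.log_params(params)

    model = RandomForestClassifier(**params, random_state=42).fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])

    # Metric and model are persisted as tracked artifacts of this run.
    mlflow.log_metric("val_auc", auc)
    mlflow.sklearn.log_model(model, "model")
```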
3. Continuous integration
The integration process in DataRobot is designed for people who might not have heard much about CI/CD, and as such it does not offer real CI/CD pipelines. Instead, newly trained models are sent to DataRobot MLOps, and an admin can check and approve them directly from the UI. Externally developed models can be imported into the DataRobot platform via a Python code wrapper, so that they, too, can make use of DataRobot MLOps. There is no built-in feature store, but you can achieve very basic functionality by saving and sharing Spark SQL queries as part of the DataRobot UI. Models can be stored in the model registry and shared with other people. However, models cannot easily be exported for use outside of DataRobot.
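The Spark SQL workaround essentially amounts to sharing a feature query as text. A generic PySpark sketch of the idea, with an invented table and storage path:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("shared-features").getOrCreate()
spark.read.parquet("s3://example-bucket/orders/").createOrReplaceTempView("orders")

# The SQL string itself is the shareable asset.
CUSTOMER_FEATURES_SQL = """
SELECT customer_id,
       COUNT(*)        AS n_orders,
       SUM(amount)     AS total_spend,
       MAX(order_date) AS last_order_date
FROM orders
GROUP BY customer_id
"""

features = spark.sql(CUSTOMER_FEATURES_SQL)
features.show(5)
```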
Domino comes with a deep Git integration. This makes it easy to use standard Git CI/CD features: for example, you can create different branches and assign reviewers to your changes. However, Domino offers neither a built-in CI/CD process nor an integrated testing framework, so all CI/CD steps must be covered by additional tools and services outside of Domino to enable a complete CI/CD process. Unlike DataRobot, Domino lets you export models as Docker images and run them outside the platform. There is again no built-in feature store, but external integration is possible. Since Domino has no model store either, users need to log model artifact metadata manually.
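In practice, this means validation logic lives in ordinary test files that an external CI tool executes on every change. A minimal pytest-style sketch; the paths, feature names and AUC threshold are invented:

```python
# test_model.py - run by an external CI tool, e.g. via "pytest" in a pipeline step.
import joblib
import pandas as pd
from sklearn.metrics import roc_auc_score


def test_model_meets_auc_threshold():
    model = joblib.load("artifacts/model.joblib")
    holdout = pd.read_csv("data/holdout.csv")
    X, y = holdout.drop(columns=["churned"]), holdout["churned"]
    auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
    assert auc >= 0.80, f"AUC {auc:.3f} below release threshold"


def test_model_handles_missing_values():
    model = joblib.load("artifacts/model.joblib")
    row = pd.read_csv("data/holdout.csv").drop(columns=["churned"]).head(1)
    row.iloc[0, 0] = None  # should not raise if the pipeline imputes
    model.predict(row)
```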
4. Industrialization zone
Retraining models is where it gets tricky in DataRobot. If you want to retrain a model on new data, you need to go back to the experimentation zone, open your model from the model registry and manually re-run it. As continuous retraining is crucial when industrializing models, this is a point where DataRobot certainly has room for improvement. Monitoring functionalities are well covered: when your models behave unexpectedly, you can visualize all important metrics and create custom notification rules via DataRobot MLOps. The only thing missing in terms of monitoring is comprehensive logging.
Domino does not come with a fully automated retraining feature either, but it is easy to schedule a notebook or script for continuous retraining. The industrialization zone shows that Domino is made for engineering-heavy developers and data scientists: you can implement everything necessary for retraining and validating models, but you should not expect a one-click solution. For instance, you can code validation steps into your script and have them executed every time a model is trained. Model monitoring and logging are possible in Domino, but unlike in DataRobot, there are no built-in dashboards to show the performance of your models.
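A minimal sketch of the kind of script you might schedule in Domino for this, with a validation gate that runs on every retraining; paths, model choice and threshold are invented:

```python
# retrain.py - scheduled, e.g. nightly, as a Domino job.
import joblib
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("data/latest_snapshot.csv")
X, y = df.drop(columns=["churned"]), df["churned"]
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=42)

model = GradientBoostingClassifier().fit(X_train, y_train)

# Validation gate: only persist the new model if it clears the threshold.
auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
if auc >= 0.80:
    joblib.dump(model, "artifacts/model.joblib")
    print(f"Model promoted (AUC={auc:.3f})")
else:
    raise SystemExit(f"Retrained model rejected (AUC={auc:.3f})")
```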
5. Data presentation
To bring your models from DataRobot to the outside world, you can deploy them as a REST API. There is also an option to create very basic web apps, but they are mainly used for scoring models. One highlight of DataRobot is the so-called challenger mode, in which different models are run against each other to compare their performance and check whether a new model beats a baseline model. Containerization is still in beta, and although you might see progress in that field soon, you should not rely on containerized models yet. To present results, DataRobot offers integrations with Tableau, Alteryx and Excel. The REST APIs can be integrated into any code to include the models in custom-built software solutions.
Models from Domino can also be deployed as REST APIs or web apps. One advantage of Domino is the built-in containerization: every environment comes as a Docker container that is automatically versioned by the platform and can easily be shared and used as a microservice outside of Domino. What is still missing is an easy-to-use testing framework; Domino has no built-in tool for either A/B testing or canary releases.
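From the client side, consuming a deployed model looks roughly the same on both platforms: an authenticated HTTP request against the model’s REST endpoint. A generic sketch with the requests library; the URL, token and payload schema are placeholders, as each platform documents its own request format:

```python
import requests

# Endpoint, token and payload shape are illustrative placeholders.
url = "https://example-ml-platform.com/deployments/abc123/predict"
payload = {"data": [{"customer_id": 42, "n_orders": 7, "total_spend": 315.5}]}

response = requests.post(
    url,
    json=payload,
    headers={"Authorization": "Bearer YOUR_API_TOKEN"},
    timeout=30,
)
response.raise_for_status()
print(response.json())
```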
Your key takeaway
The two platforms presented in this article both offer good solutions to easily build and test machine learning models and, more importantly, to scale and industrialize them. When it comes to choosing one platform over the other, it is all about your preferred approach to building models:
- DataRobot is the right choice for you if you have little to no coding experience, prefer to work with graphical interfaces or want to quickly prototype on a dataset. The downside of DataRobot is that it offers only a limited number of algorithms for ML models. Especially if you want to use state-of-the-art NLP or computer vision models, you should check whether DataRobot supports them. Nevertheless, when working with structured data, it is a solid platform with good collaboration options.
- Domino is made for you if you want to code models yourself and have full control over them. This platform is particularly well suited when several code-savvy data scientists want to collaborate on a project. It stands out with its Jira integration, which makes it easy for IT and business stakeholders to track the development of ML models and data science use cases. The Git integration as well as the built-in containerization allow easy sharing of models. Flexibility is key to Domino, but it comes at a price: there are no one-click solutions, whether for experimentation, deployment or AutoML services.
