Press release
Tale of 2 Model Monitoring Solutions: Amazon SageMaker vs Databricks ML
Model monitoring is crucial in the lifecycle of machine learning models, especially for models deployed in production environments. It is not just a "nice-to-have" but is essential to ensure the models' robustness, accuracy, fairness, and reliability in real-world applications. Without monitoring, model predictions can become unreliable, or even detrimental to the business or end users. As a model builder, how often have you thought about how a model's behavior will change over time? In my professional life, I have seen many production systems manage the model retraining lifecycle on a heuristic, gut-feel, or scheduled basis, either wasting precious resources or retraining too late.
This is a ripe problem space, as many models have been deployed in production. Hence there are many point solutions, such as Great Expectations, Neptune.ai, and Fiddler.ai, all of which boast really cool features, whether automatic metrics computation, differentiated statistical methods, or Responsible AI, which has become a real need of our time (thanks to ChatGPT and LLMs). In this op-ed, I would like to touch upon two systems that I am familiar with and that are widely used.
Amazon SageMaker Model Monitor
Amazon SageMaker is AWS's flagship fully managed ML service to build, train, deploy, and "monitor" machine learning models. The service provides a click-through setup experience in SageMaker Studio or an API experience through the SageMaker SDK. SageMaker assumes you have clean datasets for training and can capture inference requests and responses at a user-defined sampling rate and schedule. The system works well for model monitoring when the model itself is the problem. But what if the data fed to the model is the problem, or the issue sits well upstream in an ETL pipeline? AWS provides multiple data lake architectures and patterns to stitch end-to-end data and AI systems together, but tracking data lineage across them is hard, if not impossible.
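To make that capture step concrete, here is a minimal sketch of enabling data capture when deploying an endpoint with the SageMaker Python SDK. The bucket, endpoint name, and the pre-built `model` object are placeholders for illustration, not anything from the original article.

```python
# Sketch: enabling request/response capture on a SageMaker endpoint.
# Assumes an existing `model` object and a deployable container; the S3 bucket
# and endpoint name below are hypothetical.
from sagemaker.model_monitor import DataCaptureConfig

capture_config = DataCaptureConfig(
    enable_capture=True,
    sampling_percentage=100,                           # capture every request
    destination_s3_uri="s3://my-bucket/datacapture",   # placeholder bucket
)

predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.xlarge",
    endpoint_name="churn-endpoint",                    # placeholder name
    data_capture_config=capture_config,
)
```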
The monitoring solution is flexible thanks to SageMaker Processing jobs, the underlying mechanism used to compute the monitoring metrics. SageMaker Processing also lets you bring your own container. SageMaker Model Monitor is integrated with Amazon SageMaker Clarify and can surface bias drift, which is important for Responsible AI. Overall, SageMaker Model Monitor does a decent job of alerting when a model drifts.
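As a rough sketch of how that looks in practice, the snippet below baselines the training data and attaches an hourly data-quality schedule using the SDK's DefaultModelMonitor. The role ARN, S3 paths, and endpoint name are placeholders.

```python
# Sketch: baselining and scheduling a data-quality monitor with the SageMaker SDK.
# Role, S3 paths, and endpoint name are placeholders for illustration.
from sagemaker.model_monitor import DefaultModelMonitor, CronExpressionGenerator
from sagemaker.model_monitor.dataset_format import DatasetFormat

monitor = DefaultModelMonitor(
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder role
    instance_count=1,
    instance_type="ml.m5.xlarge",
    volume_size_in_gb=20,
    max_runtime_in_seconds=3600,
)

# Suggest baseline statistics and constraints from the (clean) training data.
monitor.suggest_baseline(
    baseline_dataset="s3://my-bucket/train/train.csv",      # placeholder path
    dataset_format=DatasetFormat.csv(header=True),
    output_s3_uri="s3://my-bucket/baseline",
)

# Compare captured endpoint traffic against the baseline every hour.
monitor.create_monitoring_schedule(
    monitor_schedule_name="churn-data-quality",
    endpoint_input="churn-endpoint",
    output_s3_uri="s3://my-bucket/monitor-reports",
    statistics=monitor.baseline_statistics(),
    constraints=monitor.suggested_constraints(),
    schedule_cron_expression=CronExpressionGenerator.hourly(),
)
```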
Databricks Lakehouse Monitoring
Let's look at the second contender. Databricks is a fully managed data and AI platform available across all major clouds, and it also boasts millions of downloads of the open-source MLflow. I have recently come across Databricks Lakehouse Monitoring, which, in my opinion, is a really cool paradigm for monitoring your data assets.
Let me explain why you should care if you are an ML engineer or a data scientist.
Let's say you have built a cool customer segmentation model and deployed it in production. You have started monitoring the model using one of the cool bespoke tools I mentioned earlier, which may pop up an alert blaming a data field. Now what?
✔ How do you trace where that field came from in the cobweb of ETL pipelines?
✔ How do you find the root cause of the drift?
Here comes Databricks Lakehouse Monitoring to the rescue. Databricks Lakehouse Monitoring lets you monitor all of the tables in your account. You can also use it to track the performance of machine learning models and model-serving endpoints by monitoring inference tables created from the models' output.
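Below is a minimal sketch of attaching a monitor to an inference table, assuming the `databricks-lakehouse-monitoring` preview client. The table, schema, and column names are placeholders, and since the feature is in preview, the exact API may differ from current documentation.

```python
# Sketch: attaching a monitor to an inference table with the Lakehouse Monitoring
# preview client (databricks-lakehouse-monitoring). Table, schema, and column
# names are placeholders; the preview API may have changed, so verify against docs.
from databricks import lakehouse_monitoring as lm

lm.create_monitor(
    table_name="main.ml.churn_inference",        # placeholder Unity Catalog table
    profile_type=lm.InferenceLog(
        problem_type="classification",           # value format may differ by version
        prediction_col="prediction",
        label_col="label",                       # filled in once ground truth lands
        model_id_col="model_version",
        timestamp_col="inference_timestamp",
        granularities=["1 day"],
    ),
    output_schema_name="main.ml_monitoring",     # where metric tables get written
)
```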
Let's put this in perspective: the data layer is the foundation of AI. When teams across the data and AI portfolio work together on a single platform, the productivity of ML teams, access to data assets, and governance are far superior to siloed or point solutions.
This vision essentially captures an ideal data and model monitoring solution. The journey starts with raw data flowing through Bronze -> Silver -> Gold layers. Moreover, features are treated as just another table (a refreshing new paradigm; goodbye, feature stores). Then you get down to ML brass tacks by using the Gold and feature tables for model training and serving the model.
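As a small illustration of "features are just another table", a training job can read the governed Delta table directly rather than going through a separate feature store. The table and column names below are hypothetical.

```python
# Sketch: training directly off a governed gold/feature Delta table instead of a
# dedicated feature store. Table and column names are hypothetical.
from pyspark.sql import SparkSession
from sklearn.linear_model import LogisticRegression

spark = SparkSession.builder.getOrCreate()

# The feature table is just a Delta table registered in Unity Catalog.
features = spark.table("main.gold.customer_features").toPandas()

X = features[["tenure_months", "monthly_spend", "support_tickets"]]
y = features["churned"]

model = LogisticRegression().fit(X, y)
```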
Databricks recently launched the awesome inference table feature in preview. Imagine all your requests and responses captured as a table rather than as raw files in your object store; the possibilities are limitless if the table can scale. Once ground truth arrives after the fact, just start logging it in a ground-truth table. Since all of this data is being ETLed with Databricks components, Unity Catalog offers nice end-to-end data lineage, similar to Delta Live Tables.
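For example, once both tables exist, joining them is plain Spark. This sketch computes a daily accuracy series from hypothetical inference and ground-truth tables keyed by a request ID.

```python
# Sketch: joining a captured inference table with a ground-truth table so that
# accuracy (and, later, drift) can be tracked over time. Table and column names
# are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

inferences = spark.table("main.ml.churn_inference")        # request/response log
ground_truth = spark.table("main.ml.churn_ground_truth")   # labels arriving later

scored = (
    inferences.join(ground_truth, on="request_id", how="inner")
    .withColumn("correct", (F.col("prediction") == F.col("label")).cast("int"))
)

daily_accuracy = (
    scored.groupBy(F.to_date("inference_timestamp").alias("day"))
    .agg(F.avg("correct").alias("accuracy"))
    .orderBy("day")
)
daily_accuracy.show()
```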
Now you can turn on monitors, and Databricks starts computing metrics. Any data drift or model drift can be root-caused to upstream ETL tables or source code. And if you love other monitoring tools on the market, just have them crawl these tables and derive your own insights.
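Because the computed metrics land in ordinary Delta tables, any tool (or a plain notebook) can read them. The sketch below queries a drift-metrics table, assuming the `_drift_metrics` naming convention and metric column names from the preview; verify both against your workspace.

```python
# Sketch: reading the metric tables a monitor writes so external tools can
# consume them. The `_drift_metrics` suffix and metric column names follow the
# preview's conventions and may differ in your workspace.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

drift = spark.table("main.ml_monitoring.churn_inference_drift_metrics")
(
    drift.filter(F.col("column_name") == "monthly_spend")   # hypothetical column
    .select("window", "column_name", "js_distance")          # metric names may vary
    .orderBy("window")
    .show(truncate=False)
)
```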
It looks like Databricks wants to take it up a notch by extending the Expectations framework in DLT to any Delta table. Imagine the ability to set column-level constraints and instruct jobs to fail, roll back, or apply defaults; problems could be pre-empted before they happen. I can't wait to see this evolution over the next few months.
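As a taste of what that could look like, here is how DLT expectations work today on a pipeline dataset. The dataset and column names are hypothetical, and the extension to arbitrary Delta tables is, as noted, still speculation on my part.

```python
# Sketch: DLT expectations as they exist today, applied to a pipeline dataset.
# Dataset and column names are hypothetical.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Silver customer events with enforced quality rules")
@dlt.expect_or_drop("valid_customer_id", "customer_id IS NOT NULL")   # drop bad rows
@dlt.expect_or_fail("non_negative_spend", "monthly_spend >= 0")       # fail the update
def silver_customer_events():
    return (
        dlt.read_stream("bronze_customer_events")
        .withColumn("ingested_at", F.current_timestamp())
    )
```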
To summarize, I came up with the following comparison between SageMaker and Databricks Model Monitoring.
I hope you enjoyed the quick read. I also hope you will engage Propensity Labs for your next machine learning project; no matter how hard the problem is, we have a solution. Keep monitoring.
Contact Us:
Propensity Labs
2300 Lakeview Parkway, Suite 700, Alpharetta, Georgia, 30009
About Us
We are a leading provider of cutting-edge data and AI professional services. Our team of experienced professionals is dedicated to helping businesses of all sizes harness the power of data to drive growth and innovation. We specialize in developing customized solutions that leverage the latest advancements in machine learning, deep learning, and natural language processing to help our clients achieve their goals.