Understand key MLOps governance strategies

Machine learning developers can speed up production of ML applications -- while avoiding risks to their organizations -- with an MLOps governance framework.

Governance strategies are imperative in machine learning to ensure its responsible use.

ML platform providers are also building governance frameworks, typically as part of their efforts to develop mature ML operations (MLOps) lifecycles. MLOps is the application of DevOps principles and techniques to deliver constantly improving ML-powered applications into production.

To achieve a mature lifecycle, a company must define the following (sketched in code after the list):

  • The data cycle, where data quality is ensured.
  • The model cycle, where ML models are trained on that data.
  • Processes linking the two cycles to each other.
  • Processes linking the data and model cycles to the final application development cycle.
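
In code, these linked cycles amount to staged hand-offs in which each cycle's output becomes the next cycle's input. The following minimal Python sketch shows the shape of such a lifecycle; every function and field name in it is a hypothetical placeholder, not part of any real MLOps platform.

```python
# Minimal sketch of a governed lifecycle linking the three cycles.
# Every function here is a hypothetical placeholder, not a product API.

def run_data_cycle(raw_records: list[dict]) -> list[dict]:
    """Data cycle: keep only records that pass a basic quality check."""
    return [r for r in raw_records if r.get("value") is not None]

def run_model_cycle(dataset: list[dict]) -> dict:
    """Model cycle: 'train' a trivial stand-in model on the vetted data."""
    mean = sum(r["value"] for r in dataset) / max(len(dataset), 1)
    return {"type": "mean-predictor", "mean": mean}

def run_application_cycle(model: dict) -> str:
    """Application cycle: package the trained model for deployment."""
    return f"app built around {model['type']} (mean={model['mean']:.2f})"

# The hand-offs between these calls are the "processes linking the cycles"
# that governance policy must define and document.
records = [{"value": 1.0}, {"value": None}, {"value": 3.0}]
print(run_application_cycle(run_model_cycle(run_data_cycle(records))))
```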

Companies should also define governance guidelines and criteria to steer their MLOps lifecycles to completion. These are necessary to understand and prevent the risks of an ungoverned lifecycle.

MLOps must govern data, models and other processes

ML development teams can implement governance strategies within each stage of the overall lifecycle. Areas for data cycle governance can include data sources and data sets.

Data sources need to meet certain criteria before the company allows their use. These may include how the source gathered the data -- for instance, legally and ethically -- and which licensing terms are acceptable, if any apply.
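
One way to operationalize such criteria is an automated vetting check that runs before any source is ingested. The sketch below is illustrative only; the accepted licenses, collection methods and metadata field names are assumptions, not a standard.

```python
# Hypothetical source-vetting check. The accepted licenses, collection
# methods and metadata fields are illustrative assumptions, not a standard.
ACCEPTED_LICENSES = {"CC0-1.0", "CC-BY-4.0", "internal-consented"}
LAWFUL_COLLECTION = {"opt-in survey", "public record"}

def source_is_acceptable(source_meta: dict) -> bool:
    """Reject a source unless its collection method and license both pass policy."""
    return (source_meta.get("collection_method") in LAWFUL_COLLECTION
            and source_meta.get("license") in ACCEPTED_LICENSES)

print(source_is_acceptable({"collection_method": "opt-in survey",
                            "license": "CC-BY-4.0"}))   # True
print(source_is_acceptable({"collection_method": "scraped",
                            "license": "unknown"}))     # False
```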

Data sets need to meet certain criteria before they are added as training sets. These may include standards for what must be present, such as provenance indicating where, how and when the data was collected, and what may not be present, such as personally identifiable information (PII). Criteria should also include standards for overall data quality.
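
Such criteria can likewise be encoded as an admission gate that every candidate training set must pass. In the sketch below, the required provenance fields, PII patterns and quality threshold are example policy choices, not prescriptions.

```python
import re

# Hypothetical admission gate for a candidate training set. The required
# provenance fields, PII patterns and quality threshold are example policy
# choices, not prescriptions.
REQUIRED_PROVENANCE = {"where", "how", "when"}
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # US Social Security number
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email address
]

def dataset_admissible(provenance: dict, records: list[str],
                       max_empty_ratio: float = 0.05) -> bool:
    if not REQUIRED_PROVENANCE <= provenance.keys():             # provenance must be complete
        return False
    if any(p.search(r) for r in records for p in PII_PATTERNS):  # PII must be absent
        return False
    empties = sum(1 for r in records if not r.strip())           # crude quality proxy
    return empties / max(len(records), 1) <= max_empty_ratio

print(dataset_admissible({"where": "EU", "how": "survey", "when": "2024"},
                         ["sunny, 21C", "rainy, 14C"]))          # True
```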

Areas for model cycle governance can include testing and data use by the team working on the application.

A model must pass testing before it can be integrated into an application. This process may include not just tests for the correctness or precision of answers, but also checks for problems such as biased outputs.
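
A pre-integration gate could combine a correctness threshold with a simple fairness metric. In the illustrative sketch below, the 0.90 accuracy floor, the 0.10 bias ceiling and the demographic parity measure are all assumed policy choices, not requirements from any particular framework.

```python
# Hypothetical pre-integration gate. The 0.90 accuracy floor, the 0.10
# bias ceiling and the parity metric are illustrative policy choices.

def demographic_parity_gap(preds: list[int], groups: list[str]) -> float:
    """Largest gap in positive-prediction rate between any two groups."""
    totals = {}
    for p, g in zip(preds, groups):
        hits, n = totals.get(g, (0, 0))
        totals[g] = (hits + p, n + 1)
    rates = [hits / n for hits, n in totals.values()]
    return max(rates) - min(rates)

def model_passes_gate(accuracy: float, preds: list[int], groups: list[str]) -> bool:
    """Block integration unless the model is both accurate and fair enough."""
    return accuracy >= 0.90 and demographic_parity_gap(preds, groups) <= 0.10

# A model can be accurate overall yet still fail the bias check:
print(model_passes_gate(0.95, [1, 1, 1, 0, 0, 0], ["a", "a", "a", "b", "b", "b"]))  # False
```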

Human workers should be told what is allowed in both data inputs and model outputs and what is expressly forbidden. For example, PII is often forbidden in both.

Throughout all cycles, policy should dictate where and how version control and documentation are used. This covers the inputs and outputs of each lifecycle stage: how the team acquires and prepares data sets for ML models, how ML models operate before and after each training run, and how ML-powered applications operate before and after those models are incorporated.
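
As an illustration, each training run could append a lineage record that pins the exact data set, model artifact and code version involved. The record schema below is hypothetical; in practice, teams would typically rely on a model registry or experiment tracker for this.

```python
import datetime
import hashlib
import json

# Illustrative lineage record tying a training run to versioned inputs and
# outputs. The field names are hypothetical, not a registry schema.

def fingerprint(payload: bytes) -> str:
    """Content hash so a data set or model artifact can be pinned exactly."""
    return hashlib.sha256(payload).hexdigest()[:12]

def record_training_run(dataset_bytes: bytes, model_bytes: bytes,
                        code_version: str) -> dict:
    return {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "dataset_version": fingerprint(dataset_bytes),
        "model_version": fingerprint(model_bytes),
        "code_version": code_version,  # e.g., a Git commit hash
    }

entry = record_training_run(b"training data", b"model weights", "9f3c2ab")
print(json.dumps(entry, indent=2))     # append this to an auditable log
```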

MLOps governance is not a one-off

To implement all these processes correctly and sustainably over time, an IT team needs to formally define a framework as a template. Each new ML development project should include the templated processes and resources. Every staff member involved should be familiar at a high level with the entire MLOps lifecycle and this governance framework.

There are multiple ways to achieve such a framework. Organizations can do any of the following:

  • Develop these policies and processes from scratch.
  • Copy governance models implemented in other places.
  • Purchase software that embodies governance -- e.g., with premade workflows -- and tailor it to suit.

Ungoverned ML development brings big risks

Lack of governance in MLOps raises risks, primarily functional problems in which models don't do what they ultimately should. There are also reputational risks, such as loss of trust, and legal risks related to improper use of data, use of tainted data and deployment of unacceptably biased applications.

Functional problems include the following:

  • Flaws in data sets, potentially including intentionally "poisoned" data, become harder to spot.
  • The effects of bad data become harder to trace and reverse.
  • When a model ultimately produces incorrect or biased results, tracing and fixing the problem is harder.

In short, such problems make the whole process of developing functional, legally compliant software slower and less certain.

If an ML application's outputs lead to legal action, the company that developed it could be prosecuted or sued for illegal use of biased outputs or improper use of protected personal information. In any such case, an audit of the application's development may be required; an audit may likewise be required to demonstrate compliance with applicable laws or with company policy. The audit trail is only as good as the documentation of all the activities involved, and without solid governance, that trail will likely be incomplete and inconsistent.

In the end, a well-constructed MLOps lifecycle, run by a well-trained team, can help organizations implement and embody good governance practices, with the goal of speeding up production of functional, compliant software.
