Enterprise AI: Evolving Software Governance to Support New Intelligent Systems

    Martin Waller and Gerard Kerr


    AI Innovation

    To ensure enterprise software delivers value over its lifetime, enterprise IT has established rigorous governance frameworks. These frameworks cover integration, usability and training, steady-state support, and more, with defined ownership roles. But now, IT departments must manage the deployment not just of software, but of intelligent AI systems, into their architecture.

    Well-designed Enterprise AI systems are wrapped in software that slots neatly into existing structures. But in operation, AI systems present very different challenges from conventional software.

    To adapt to a world where companies benefit from these intelligent systems, software governance needs to evolve to include ‘AI governance’.

    What's the difference between software governance and AI governance?

    Probably the biggest difference between software governance and AI governance comes from change management.

    Software is updated proactively in response to user demands or wider IT changes but doesn't change on its own.

    Conversely, AI is a set of intelligent processes that can evolve, in ways that rules-based software can't. AI governance, therefore, needs to be both proactive (adding new data sources, updating the model) and reactive (responding to a change in the model).

    Often, complete human oversight of an AI system will be impossible or impractical. Instead, governance involves managing degrees of change, and knowing when to tweak or retrain the model. This requires implementing ways to monitor and assess key metrics. When these metrics fall outside of acceptable ranges, warnings or errors should be raised. Processes must be put in place to establish what kind of intervention is required when the model operations team receives these alerts.

    Some ongoing AI checks will be comparable to software checks, but they present unique challenges and require specialist understanding to perform. These include assessing the impact of changes in incoming data streams on the model, and versioning models and training data before and after changes.
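    Versioning a model together with its training data can be sketched as a simple immutable record. This is a minimal illustration only; the field names and helper below are our own assumptions, not a standard schema or a specific tool's API.

```python
# Minimal sketch of versioning a model together with its training data.
# Fields and names here are illustrative assumptions, not a standard schema.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)
class ModelVersion:
    """Immutable record tying a model artifact to the data it was trained on."""
    model_id: str
    version: str
    training_data_hash: str  # e.g. checksum of the training-set snapshot
    created_at: str


def register_version(model_id: str, version: str, data_hash: str) -> ModelVersion:
    """Record a version before or after a model change."""
    return ModelVersion(model_id, version, data_hash,
                        datetime.now(timezone.utc).isoformat())


# Record versions either side of a retraining change
before = register_version("demand-model", "1.4.0", "sha256:...")
after = register_version("demand-model", "1.5.0", "sha256:...")
```

    Keeping the record immutable (`frozen=True`) means a version, once registered, cannot be silently altered, which supports later audits.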

    Models have many more metrics to monitor than software, including:

    • Precision
    • Accuracy
    • False-positive rate
    • Individual node performance
    • Actual run time vs predicted performance

    All of this makes change more complex and failure more likely.

    For a more detailed look at how and why models change over time, see our article on Steady State AI Support.

    What's included in an enterprise AI governance framework?

    So, what should a framework for model deployment, monitoring, and change management include? There’s no one-size-fits-all, so an initial audit of existing supported assets is a sensible first step for determining how governance needs to evolve.

    In establishing your AI governance, we recommend paying particular attention to the following six points.

    1. Traceability, explainability, and interpretability. Ensure you have adequate mechanisms for investigating what models are doing and holding them to account for their decisions.
    2. Monitoring and status. Deploy automated systems to spot model drift (changes in predictive power due to a changing environment), outliers in input and inference data, or performance loss (infrastructure performance and latency degradation). Manual human checks should be performed in line with model risk, with more checks when the cost of model failure is high.
    3. Alerts and communication. Monitoring should be deployed alongside a robust alerts system, with alerts tied to thresholds. Alerts should be categorized by severity: a warning prompts the Ops team to review a particular prediction, while an error indicates the inference data is erroneous and intervention is needed. Decisions on acceptable thresholds must come from an understanding of both the model and the business usage of the system.
    4. Model oversight. Assign a ‘model owner’, who has oversight of, and accountability for, the model. They will be responsible for routine checks, and a rigorous audit at least once per year to look under the hood of the model, revalidate, and check monitoring systems.
    5. Manage upgrades and modifications. Establish processes for changing models, covering classification of changes (e.g. 'performance', 'inputs' or 'intended use'); verification and validation approaches, required documentation, and any regulatory compliance processes.
    6. Security. Set up processes to monitor the model and flag unusual activity that differs from normal use – such as model-extraction attacks designed to show an attacker how to recreate the model and steal your IP.
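    To make the monitoring and alerting points concrete, here is a minimal sketch of threshold-based alerting on a single monitored metric, such as rolling accuracy. The threshold values are invented for this sketch; in practice they must come from an understanding of the model and its business usage, as noted above.

```python
# Illustrative threshold-based alerting for a monitored model metric.
# Threshold values are invented for this sketch, not recommendations.
WARN_THRESHOLD = 0.90   # below this: warn the Ops team to review
ERROR_THRESHOLD = 0.80  # below this: intervention is needed


def check_metric(value: float) -> str:
    """Classify a metric reading as 'ok', 'warning', or 'error'."""
    if value < ERROR_THRESHOLD:
        return "error"
    if value < WARN_THRESHOLD:
        return "warning"
    return "ok"


alerts = [check_metric(v) for v in (0.95, 0.87, 0.74)]
# → ["ok", "warning", "error"]
```

    Real deployments would evaluate such checks on rolling windows rather than single readings, so that one noisy data point does not trigger an intervention.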


    How to make your software governance ‘AI-ready’

    Delivering your new AI governance will require new processes, technologies, roles, and responsibilities.

    New roles will need to be created for Model Owners (MOs) to check models and respond to users’ questions and concerns. They will report to a new Model Risk Management Group, which will be responsible for oversight and for defining acceptable risk levels across the organization’s portfolio of models, based on the severity of the impact of a wrong decision and the likelihood of it happening.

    The MO will need to identify what they need to monitor the model's inputs and outputs. This will likely include tools for data monitoring and explainability, and processes for working with expert users to evaluate outputs.

    A roadmap for evolving software governance to AI governance

    Finally, how does an organization evolve its software governance to a place where it can effectively manage intelligent systems? We recommend a 5-step approach:

    1. Vision and strategy. Start with a clear vision for what you want intelligent systems to deliver for the organization, against which all other decisions can be checked. A vision to increase sales will have different deliverables from a vision to reduce drug development times.
    2. Train and uplift skills. Identify roles within the organization that are required to support model delivery and operation. Identify the necessary training and recruitment to ensure there's adequate model understanding and capability within the enterprise.
    3. Upgrade architecture and tooling. Agree upon and roll out platforms for AI model management in operations, for monitoring, logging, auditing, versioning, and model inventory. Establish accountability procedures, standards, and policies around the usage of AI. Create documentation templates to standardize change management.
    4. Deployment, transitioning, and operationalization. Set up a standardized delivery framework for rolling out new AIs (e.g. our RAPIDE framework). This allows you to develop and deploy production-ready AIs that will work well within a production environment. Good governance should be involved during the design and development phases when decisions about tools, frameworks, and models will impact the ability to maintain it in production. Create communication and training for those using AI solutions and those whose data will be used in AI solutions.
    5. Monitoring, continual improvement. Move from stage-gated checks to continuous monitoring of models. Collect data for retraining and retesting. Commence regular meetings and model assessments. Outline and categorize the kinds of model changes that may be required, including their trigger points, and ensure processes are in place to manage these. Be ready to respond to trigger points for reactive intervention.
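    The categorization of model changes and their trigger points can be sketched as a simple lookup, using the 'performance' / 'inputs' / 'intended use' classification discussed earlier. The trigger names and suggested actions below are invented for illustration.

```python
# Illustrative mapping from monitoring trigger points to change categories
# ('performance', 'inputs', 'intended use'). Trigger names are invented.
CHANGE_TRIGGERS = {
    "accuracy_drop": "performance",           # e.g. retrain or tune the model
    "input_schema_change": "inputs",          # e.g. revalidate the data pipeline
    "new_business_use_case": "intended use",  # e.g. full revalidation
}


def change_category(trigger: str) -> str:
    """Return the change category for a trigger, or 'unclassified'."""
    return CHANGE_TRIGGERS.get(trigger, "unclassified")
```

    An unclassified trigger is itself a useful signal: it means the governance framework has encountered a kind of change it has not yet outlined a process for.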

    Enterprise IT is a sophisticated operation that's become adept at deploying software into a complex technical and human environment. It's now presented with new challenges in managing the operation of enterprise AI systems. To address these, it will require new technologies, processes, skills, and ways of thinking.

    Tessella has worked in partnership with a wide range of pioneers of enterprise AI, who have gone through the process of learning to deal with new approaches to gain value from intelligent systems. We've benefitted from this shared learning to understand what's needed, what works, and where the pitfalls are.

    As enterprise AI and the need for AI governance moves into the mainstream, Tessella can deploy these skills and experience to help enterprises deliver maximum AI benefit to their organization.
