Towards Intelligent Automation

Intelligent Automation is currently one of the hottest topics in Robotic Process Automation (RPA), but there’s little to no information on how to implement it in a practical manner.

If you’re part of an RPA team, you’re likely wondering how you can get started with integrating Artificial Intelligence (AI) and Machine Learning (ML) into your RPA platform. Having worked in the ML and RPA fields for over five years, I’ve developed some ideas to help RPA teams structure their thinking around Intelligent Automation delivery.

Note that this article is actively undergoing revision as I get better ideas on how to achieve Intelligent Automation.

Types of AI in RPA

When discussing the combination of AI and RPA, there are really three completely separate topics:

  1. Adding AI to your automated Business Processes
  2. Using RPA to automate the building and tuning of AI models and software
  3. Using AI to improve the RPA system itself

Most people don’t make the distinction, but when referring to Intelligent Automation, they typically mean point 1. Point 1 is what I’ll be addressing in this article.

Prerequisites

Before getting started, there are criteria that your organization should first meet.

RDA vs. RPA

I believe that Intelligent Automation will scale poorly in Robotic Desktop Automation (RDA) scenarios. Having said this, RDA practitioners can still apply the advice given in this article on a process-by-process basis.

Currently, using ML/AI correctly inside processes requires planning, knowledge, and testing. It’s possible to teach staff how to use a single RDA design tool so that they can automate their own processes: they already understand their processes and know how to use the applications involved. However, it’s unrealistic to expect these same users to learn how to use a wide variety of ML algorithms correctly and ethically.

Most Machine Learning models operate as “black boxes”: there’s no way to understand or justify the output that comes out of one of these models. A situation where dozens of different machine learning models are being used by “Digital Assistants” across an RDA platform would be impossible to audit and manage.

On the other hand, with a centrally controlled, enterprise-grade RPA platform, we can risk-manage and audit which ML algorithms are being used, and subject their use to change control when those models get updated.

RPA Maturity Level

I would avoid adding Intelligent Automation if you are just starting your automation journey. Transforming your organization to embrace RPA is already a complicated endeavour. More specifically, there should be a functioning RPA governance structure in place that will plan, define standards for, and control the rollout of AI within RPA across the organization. “Intelligence” can be added once RPA is stable and mature.

AI Governance

A stable foundation for Intelligent Automation rollout will include the integration of AI concerns into the RPA governance system. This includes:

  • Following and integrating the AI strategy of the organization
  • Planning the development of AI expertise within the RPA team
  • Developing guidelines on how to test, score and select an appropriate ML solution
  • Pre-approving AI vendors and algorithms for use
  • Defining boundaries for the ethical use of AI
  • Defining the artefacts to produce and standards to follow when incorporating AI into a process
  • Approving and reviewing the use of AI
  • Maintaining an ML Register to document all uses of AI inside your processes

If there is no AI/ML expertise on the current RPA team, there needs to be a plan to borrow expertise from elsewhere in the organization to help set up AI usage guidelines for the RPA function. I expect that there will eventually be a small working group or “AI design committee” within the RPA team. This group can provide expertise to Business Users when they want guidance on how to add AI to their processes.

My recommendation would be to embark on your Intelligent Automation journey when you are at least at a medium level of RPA maturity and have a centrally controlled RPA platform.

Getting Started with Intelligent Automation

Problem Statement Definition

We’re now in the age of Narrow AI (as opposed to General AI), meaning that AI can only be used to solve specific, targeted problems. The first step is to formalize the specific problem that we intend to solve using AI. This statement includes:

  • A description of what you are trying to solve with machine learning
  • What type of Machine Learning problem it is:
    • Supervised Learning – providing labelled data with our desired outcomes
    • Unsupervised Learning – finding patterns in data without human direction
    • Reinforcement Learning – a goal-oriented algorithm that tries to maximize a reward by planning a sequence of steps or actions to take
  • What inputs can be potentially sent into the AI algorithm:
    • Numerical
      • Discrete numbers (Integers)
      • Continuous numbers (Real numbers)
    • Categorical
    • Ordinal
      • Mix of numerical and categorical, e.g. 5-star ratings
    • Images
    • Words or Sentences
    • Documents
    • Other
  • What output(s) you expect for use in the Business Process. This could be one or a combination of many things, including:
    • Confidence intervals or scores
    • Text labels (classification)
    • Binary (yes/no)
    • Numerical values, and in what range (e.g. 0 to 1)
    • Text (OCR, or summarizing phrases)
    • Image coordinates
    • Others
  • How the outputs of the AI system will be used
    • To make a decision?
    • To perform a calculation?
    • To generate new data?
    • Other
  • The risks of using the outputs of the AI system in the process
  • Shortlisted ML algorithms or vendor services
  • Restrictions, such as cost, or inability to connect to the Internet
  • SLAs or speed-related performance requirements
  • Model performance requirements such as accuracy
  • How this ties back to the business requirements and process

The inputs and outputs that you have defined will narrow down the list of possible Machine Learning algorithms that can be used in your Business Process.

Inside the RPA documentation, we need to list the alternatives we have if no suitable AI solution can be found or developed. Will the automation be abandoned? Converted to human-in-the-loop? Will we replace AI with rules-based processing instead, or do something else?
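
To make this concrete, below is a minimal sketch of a problem statement captured as a structured Python record. Every field name and example value is illustrative only, not a standard; adapt the fields to your own governance artefacts.

    from dataclasses import dataclass

    @dataclass
    class MLProblemStatement:
        """Illustrative record of one Intelligent Automation problem statement."""
        description: str            # what we are trying to solve with ML
        learning_type: str          # "supervised", "unsupervised" or "reinforcement"
        inputs: list                # e.g. ["categorical", "continuous numbers"]
        outputs: list               # e.g. ["binary", "confidence score"]
        output_usage: str           # e.g. "to make a decision"
        risks: str                  # risks of using the output in the process
        shortlist: list             # candidate ML algorithms or vendor services
        restrictions: list          # e.g. ["no Internet access"]
        max_latency_seconds: float  # SLA / speed-related requirement
        min_accuracy: float         # model performance requirement
        fallback: str               # what happens if no suitable AI solution is found

    statement = MLProblemStatement(
        description="Classify loan applicants as having good or bad credit",
        learning_type="supervised",
        inputs=["categorical", "continuous numbers"],
        outputs=["binary", "confidence score"],
        output_usage="to make a decision",
        risks="wrongly refusing credit to an eligible applicant",
        shortlist=["logistic regression", "vendor credit-scoring service"],
        restrictions=["data must stay on-premises"],
        max_latency_seconds=2.0,
        min_accuracy=0.9,
        fallback="convert the case to human in the loop",
    )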

AI Risks

Careful thought must be given to how the use of AI will affect the Business Process. For example, if we are using AI to perform a medical diagnosis, the severity of the AI getting the result right or wrong will depend on what you do with the result afterwards.

If the use of AI could result in undesired medical side effects, death, or a huge financial burden, AI may not be suitable for use in this process.

Narrow Down AI Solution Choices

Once you have your problem statement, you can start choosing an AI solution. This is a huge topic on its own and a bit outside the scope of this discussion. Microsoft Azure provides some guidance to help you get started on choosing an algorithm.

The list of potential algorithms that you can use will also be affected by guidelines provided by AI governance and the Problem Statement.

Build vs. Buy

There are many commercial off-the-shelf (OTS) Machine Learning solutions available on the market today. Most require Internet connectivity and are billed by the number of requests, which means that your RPA platform will likely need to be able to access the web. If no Internet access is available, then you may need to develop your own Machine Learning model.
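
As an illustration of that per-request, web-based model, here is a hedged sketch of calling a hypothetical OTS classification endpoint over REST from Python. The URL, header, and response shape are invented for this example; substitute your vendor’s documented API.

    import requests

    # Hypothetical vendor endpoint and key -- replace with your vendor's real values.
    ENDPOINT = "https://api.example-ml-vendor.com/v1/classify"
    API_KEY = "your-api-key"

    def classify_image(image_bytes: bytes) -> dict:
        """Send one image to the (hypothetical) OTS service; each call is billed."""
        resp = requests.post(
            ENDPOINT,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"image": image_bytes},
            timeout=10,  # enforce your speed SLA at the call site
        )
        resp.raise_for_status()
        # Assumed response shape, e.g. {"label": "invoice", "confidence": 0.97}
        return resp.json()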

OTS Pros and Cons

The benefits of OTS AI include:

  • Fast integration through APIs
  • No need for training your own model
  • You can get started with less knowledge of AI

Cons include:

  • Your problem must be generic enough to have been solved by someone else
  • You might not be able to improve the model yourself as it is controlled by the vendor
  • The vendor may update the model without you knowing about it, thus potentially changing your results for the worse

Half the battle is knowing what AI already exists out there. If your use case falls into one of these categories, you may be a good candidate for using OTS AI:

  • Image and video moderation for adult content
  • Identifying faces in photos
  • Facial analysis, for determining age, gender
  • Identifying celebrities and landmarks in photos
  • Reading text from images and video
  • Transcribing text from audio
  • Translating text from one language to another
  • Categorizing text and extracting names, locations, and dates
  • OCR (computer-generated text) and ICR (handwritten text)

Custom Build Pros and Cons

The pros of custom building your own ML model include:

  • You can build a model to solve your task specifically
  • You can improve the model over time
  • You control when new versions of the model are rolled out

And the cons:

  • You need substantial AI know-how to get started
  • You may need additional infrastructure provisioned to train your models

Evaluate Algorithms

Once you’ve chosen some OTS ML services or built your own models, you’ll need to evaluate them outside of the RPA platform. We have two goals when testing the shortlisted algorithms. We want to assess:

  1. The speed performance; and
  2. The model performance (accuracy)

Realistic inputs should be used for testing.

There are a number of ways to evaluate ML models, and the appropriate method really depends on the requirements of the Business Process.
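
As a minimal sketch of such an evaluation, covering both goals at once: time each call and tally correct predictions over a realistic test set. The predict callable here stands in for whichever shortlisted model or service is under test.

    import time

    def evaluate(predict, test_cases):
        """Measure speed and accuracy of one candidate model outside the RPA platform.

        predict    -- callable taking one input and returning a prediction
        test_cases -- list of (input, expected_output) pairs built from realistic data
        """
        correct, latencies = 0, []
        for x, expected in test_cases:
            start = time.perf_counter()
            prediction = predict(x)
            latencies.append(time.perf_counter() - start)
            correct += prediction == expected
        return {
            "accuracy": correct / len(test_cases),
            "avg_latency_s": sum(latencies) / len(latencies),
            "max_latency_s": max(latencies),  # the worst case matters for SLAs
        }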

Create the Confusion Matrices

Apologies for the math in this section, but it really is needed.

A confusion matrix is a way to visualize the test results of a model.

A Binary Confusion Matrix

Imagine that our problem definition is to classify a person as having good credit or bad credit for a bank loan Business Process.

True Positives (TP) are when the model correctly predicts something to be True, for example, that James has good credit when he actually has good credit.

True Negatives (TN) are when the model correctly predicts that something is not True, for example, that James has bad credit when he actually has bad credit.

False Positives (FP) are when the model incorrectly predicts that something is True when it is actually False. This would be the case if the model predicted that I have good credit, when I actually have bad credit.

False Negatives (FN) are when the model incorrectly predicts that something is False when it is really True. This is when the model predicts that I have bad credit, when I actually have good credit.
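
As a rough sketch, the four counts can be tallied directly from a model’s test results; here the positive class is “good credit”, matching the example above.

    def binary_confusion(actuals, predictions, positive="good credit"):
        """Tally TP, TN, FP and FN from a binary classifier's test results."""
        tp = tn = fp = fn = 0
        for actual, predicted in zip(actuals, predictions):
            if predicted == positive and actual == positive:
                tp += 1  # predicted good credit, actually good credit
            elif predicted != positive and actual != positive:
                tn += 1  # predicted bad credit, actually bad credit
            elif predicted == positive and actual != positive:
                fp += 1  # predicted good credit, actually bad credit
            else:
                fn += 1  # predicted bad credit, actually good credit
        return tp, tn, fp, fn

Libraries such as scikit-learn provide these counts ready-made (sklearn.metrics.confusion_matrix), but the hand-rolled version shows exactly what is being counted.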

Confusion Matrices don’t need to be 2×2 in size. If there are more predictions being made, they can be bigger. For example, if we are trying to predict what object is inside an image, our matrix may look like:

A 10×10 Confusion Matrix

Confusion Matrices are a way to compare the results generated from different Machine Learning models.

Sensitivity vs Specificity

Sensitivity is the True Positive Rate and is calculated using the formula:

Sensitivity = TP/(TP+FN)

A Sensitivity value that is closer to 1 is better, as it indicates that there are fewer False Negatives.

Specificity is the True Negative Rate and is calculated using the formula:

Specificity = TN/(TN+FP)

Once again, a Specificity value that is closer to 1 is better, as there are fewer False Positives.
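
Continuing the sketch from the confusion matrix section, both rates follow directly from the four counts; the numbers below are illustrative only.

    def sensitivity(tp, fn):
        """True Positive Rate: TP / (TP + FN)."""
        return tp / (tp + fn)

    def specificity(tn, fp):
        """True Negative Rate: TN / (TN + FP)."""
        return tn / (tn + fp)

    tp, tn, fp, fn = 80, 90, 10, 20  # illustrative counts, e.g. from binary_confusion()
    print(sensitivity(tp, fn))  # 0.8 -- the 20 False Negatives drag this down
    print(specificity(tn, fp))  # 0.9 -- the 10 False Positives drag this down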

The question that we want to ask is:

For our Business Process, do we want high sensitivity or high specificity?  What is the correct balance between the two?

Are False Positives more acceptable in your business case? For example, in an Anti-Money Laundering process, is marking a non-fraud case as fraud OK as long as we can catch all fraud? If so, then we should lean towards having high sensitivity. For medical diagnosis, making a False Positive prediction may be OK, as long as there are follow-up procedures to confirm the diagnosis.

An example of a high specificity case is when we need to classify video content so that minors don’t accidentally view adult content (here, treat “safe to view” as the positive class). In this case, False Positives, i.e. adult videos marked as safe, are unacceptable. False Negatives are allowed, i.e. we accept that some videos are incorrectly marked as adult content when they really aren’t.

  • Sensitivity = When I cast a net into a lake, I want to catch every single fish. I don’t care if I catch frogs, bugs, plants, etc., as long as I get all of the fish
  • Specificity = When I cast a net into a lake, I only want to catch fish. I don’t want my net to have other things in it. I’m fine, however, with only catching some and not all of the fish in the lake

The sensitivity and specificity of the chosen ML algorithm need to be based on the requirements of the Business Process, and the risks of making an incorrect prediction.

Models with too High Accuracy

It’s not necessarily good to have a perfectly accurate model. Perfect accuracy often means that the model is overfit to its training data, leaving it with very little predictive ability on data it hasn’t seen before.
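
One quick way to spot this is to compare the model’s score on its own training data against held-out data it has never seen; a large gap between the two is the warning sign. A sketch using scikit-learn on synthetic data (assuming scikit-learn is installed):

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=1000, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # An unconstrained decision tree can simply memorize its training data.
    model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

    print("train accuracy:", model.score(X_train, y_train))  # typically ~1.0
    print("test accuracy:", model.score(X_test, y_test))     # noticeably lower if overfit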

Unsorted Ideas

  • Define an uncertainty cutoff at each point where an ML result is used to make a decision, and document it.
  • Log all ML outputs for audit purposes. Create an ML-specific report with the parameters, inputs (if possible), and the success/fail result (sketched below).
  • Define a sample period and the sample percentage of records to manually verify.
  • Select a sample of cases marked Successfully Completed (in RPA terms).
  • Which testing method? n-fold cross validation? Is this too technical?
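
On the “log all ML outputs” idea, here is a minimal sketch of the kind of audit record each ML call could emit. The field names are my own assumptions, not a standard.

    import json
    import logging
    from datetime import datetime, timezone

    logging.basicConfig(level=logging.INFO)

    def log_ml_call(process, model_name, model_version, inputs, output, confidence, threshold):
        """Emit one auditable record per ML call, feeding an ML-specific report."""
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "process": process,
            "model": model_name,
            "model_version": model_version,  # record the model/API version for audit
            "inputs": inputs,                # omit or mask if the inputs are sensitive
            "output": output,
            "confidence": confidence,
            "threshold": threshold,
            "passed_threshold": confidence >= threshold,
        }
        logging.info(json.dumps(record))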

Dealing with Uncertainty

This is one of the biggest hurdles to adopting Machine Learning in your processes. Until now, most RPA processes have been built with predictable decision paths. If a certain input is given, we can expect the output to be the same 100% of the time.

However, with AI, we are letting a “black box” model make decisions for us. The model isn’t something that we can fully understand or control, and if something goes wrong, we can’t always justify why a decision was made in the process. Instead of a guaranteed answer, these models typically return a confidence score, normally between 0 and 1, where 0 is not confident and 1 is fully confident. On top of that, the model itself can be updated over time, changing its behaviour.

Ways of dealing with uncertainty when an ML output is used to make a decision include (the first is sketched after this list):

  • Simple cutoff threshold
  • Voting with multiple algorithms
  • Deferring or delaying the process while waiting for human verification
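
A hedged sketch of the simple cutoff approach: act automatically only above the threshold and defer everything else to a person. The 0.85 value is an arbitrary placeholder to be calibrated per model and per process.

    CONFIDENCE_CUTOFF = 0.85  # placeholder; calibrate per model and per process

    def route_decision(label, confidence):
        """Let the process act on a prediction only when confidence is high enough."""
        if confidence >= CONFIDENCE_CUTOFF:
            return ("auto", label)      # proceed on the automated path
        return ("human_review", label)  # defer the case to a person

    print(route_decision("good credit", 0.93))  # ('auto', 'good credit')
    print(route_decision("good credit", 0.61))  # ('human_review', 'good credit')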

Dealing with uncertainty for success cases comes down to sampling. How much should we sample for manual verification of success cases? That depends on the confusion matrix.

Defining your Confidence Score Cutoff

Build a confusion matrix based on the decision that was made.

An OTS model was trained on someone else’s data, so figure out the thresholds that work for you. Maybe your data works well with their model and you can use a 50% cutoff. Maybe your data works poorly with the model and you need 90% confidence. Maybe you need to use a different vendor, or custom build a model instead.

This cutoff is for this particular model: if the model is updated, then recalibrate your cutoff. If there are multiple models, then this needs to be repeated for each model.
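
One hedged way to find that cutoff is to replay a labelled validation set of your own data through the model and sweep candidate thresholds, keeping the lowest one that meets your target. The sketch below targets specificity, but the same sweep works for sensitivity.

    def calibrate_cutoff(scored_cases, min_specificity=0.95):
        """Pick the lowest cutoff that keeps specificity at or above the target.

        scored_cases -- list of (confidence, is_actually_positive) pairs, where
                        confidence is the model's score for the positive class,
                        taken from your own labelled validation data
        """
        for cutoff in (i / 100 for i in range(50, 100)):  # sweep 0.50 .. 0.99
            tn = sum(1 for c, positive in scored_cases if c < cutoff and not positive)
            fp = sum(1 for c, positive in scored_cases if c >= cutoff and not positive)
            if (tn + fp) > 0 and tn / (tn + fp) >= min_specificity:
                return cutoff
        return None  # no cutoff met the target: try another model or vendor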

  • Weigh your False Positives, False Negatives, True Positives, and True Negatives.
  • You can no longer assume that your finished work is correctly finished: finished != correct.
  • For manual cases, have a human examine the model output, override the Machine Learning model’s result, and continue. Consider passing human overrides in via a startup parameter collection, and use statuses to remember completed work.
  • Have a test suite for just your Machine Learning portions.
  • Store the API version of the model for audit purposes.
  • Review the release notes and rebase your cutoffs: https://docs.microsoft.com/en-in/azure/cognitive-services/face/releasenotes#release-changes-in-june-2019
  • Consider boosting and voting.
  • Add TP, FN, etc. as tables inside your documentation.
  • Define thresholds as process-specific environment variables that differ per process.
  • Log thresholds for reporting.
  • Retrain your model using outputs from your process.
  • Sample finished work to verify at the beginning of hypercare.
  • The delivery timelines of RPA vs. ML differ; they need to be separate projects.

Your First Intelligent Automation Process

How should you choose your first AI-enabled process?

The main approaches to this include:

  1. Automating a brand new process
  2. Extending an existing automated process by adding a Machine Learning step to either the beginning or the end
  3. Replacing human input to an existing process (for a Human in the Loop process)
  4. Connecting two processes together that were once separate (this is kind of a sub-case of points 2 and 3)

My recommendation would be to add ML to the start or middle of a process, since you already know what the end result should be, so you can compare.

Extending the start or end of an existing process, rather than building a new one, is a good first approach (baby steps): it gives you a testable and repeatable outcome, because you can compare results against a known state and tune your thresholds.

For a new process, if the ML algorithm is not good enough, then it makes no sense to even automate it.
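
A sketch of the comparison that extending enables: replay historical cases whose final outcomes the existing process already produced, and measure how often the ML step agrees before trusting it live. The predict callable and the historical records are assumed to exist.

    def agreement_rate(predict, historical_cases):
        """Compare ML predictions against known outcomes from the existing process.

        historical_cases -- list of (input, known_outcome) pairs from past runs
        """
        matches = sum(predict(x) == known for x, known in historical_cases)
        return matches / len(historical_cases)

    # Tune thresholds (or switch models) until the agreement rate meets your
    # target before letting the ML step influence live work.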

What makes it simpler

  • Off-the-shelf AI
  • Extending an existing process?? Maybe not…