February 28, 2023
Declarative process mining is explainable AI
In today’s data-driven world, businesses have access to an unprecedented amount of information. Yet the challenge remains: how to make sense of this data and leverage it to drive better decision-making. Declarative process mining and explainable AI are two innovative approaches that can help address this challenge. However, many organizations are still not fully aware of their potential.
During the EcoKnow conference in August 2021, Tijs Slaats and Paul Cosma presented an insightful approach to process mining that caught my attention. As someone who has been working with process mining for a couple of years, I have seen the increasing use of AI as a solution that promises to provide intelligence and insights. However, the challenge with many AI recommendations is that they often cannot be easily explained.
According to Tijs Slaats, declarative process mining is, in fact, a form of explainable AI (XAI). The process begins by analyzing a log and using process discovery to generate a declarative model known as a DCR graph. The model is made up of simple elements such as roles, activities, and rules, which can be inspected and understood by humans.
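To make this concrete, here is a minimal sketch, in Python, of what such a model might look like as plain data. The activity and role names are made up for illustration, and a real DCR graph carries more than this, for example markings that track which activities are executed, included, or pending.

```python
from dataclasses import dataclass, field

@dataclass
class DcrGraph:
    activities: set                               # things that can happen
    roles: dict                                   # activity -> responsible role
    conditions: set = field(default_factory=set)  # (a, b): a must happen before b
    responses: set = field(default_factory=set)   # (a, b): after a, b must follow

# Hypothetical model: a notification from another system obliges case closure.
model = DcrGraph(
    activities={"Receive notification", "Close case"},
    roles={"Receive notification": "System", "Close case": "Case worker"},
    responses={("Receive notification", "Close case")},
)

# Each rule is a simple, individually inspectable element:
for a, b in model.responses:
    print(f'Rule: whenever "{a}" happens, "{b}" must eventually follow.')
```

Because every rule is such a small, named element, a domain expert can look at it in isolation and decide whether it matches reality.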
Of particular importance for explainability are the rules within the declarative models. Each rule can be inspected individually and accepted or rejected based on its accuracy and relevance. This approach enables people to review and verify the models generated by the computer, thereby ensuring that humans can understand the underlying logic.
The verification process for declarative models goes beyond classical verification, which focuses solely on verifying the result. Instead, it verifies the logical reasoning behind the answers, making it easier for humans to understand the models and the insights they provide. This level of transparency and explainability is critical for ensuring that organizations can make informed decisions based on reliable data.
Explainable AI is crucial for several reasons. One of the most significant is that many AI solutions are susceptible to bias. For example, suppose we ask an AI system like ChatGPT about a police officer who has two children with Peter. The system might start talking about same-sex marriage and adoption rather than concluding that the police officer is likely a woman. To address such biases, it’s essential to understand the rules and decision-making processes behind AI systems. Without explainability, it’s difficult to identify and correct bias, leading to potential harm or discrimination. For more examples of biases and how to avoid them, check out this article from Levity.ai.
A note on process mining
Classical process mining aims to discover and analyze the sequence of activities that occur in a business process, typically represented by a process flowchart. This approach involves extracting event logs from various sources, such as IT systems or sensors, and using techniques such as process discovery, conformance checking, and performance analysis to identify bottlenecks and improve process efficiency.
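To give a rough feel for this flow-oriented view, the sketch below computes a directly-follows graph, the basic building block behind many classical discovery algorithms, from a toy event log. The traces and activity names are invented for illustration.

```python
from collections import Counter

# Each trace is the ordered list of activities observed in one case.
log = [
    ["Register", "Assess", "Close case"],
    ["Register", "Close case"],
]

# Count how often one activity is immediately followed by another;
# classical discovery algorithms build flowcharts from such counts.
dfg = Counter((a, b) for trace in log for a, b in zip(trace, trace[1:]))

for (a, b), n in dfg.items():
    print(f"{a} -> {b}: {n} time(s)")
```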
In contrast, declarative process mining focuses on discovering patterns and rules that govern the behavior of a process. Rather than focusing on the specific sequence of activities, declarative process mining seeks to identify the constraints and conditions that define when a certain activity can or cannot occur, or must occur at a later stage, potentially within a certain deadline. This approach involves analyzing event logs to identify the various states and properties of the process, and using techniques such as process constraint discovery and constraint-based mining to uncover hidden patterns and dependencies.
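A declarative miner instead looks for rules that hold across traces. As a minimal sketch of the idea, the code below checks one classic constraint, response ("if A occurs, B must eventually follow"), against a toy log and keeps the candidates that hold often enough. The log, the activity names, and the 80% threshold are all assumptions for illustration; real constraint discovery algorithms are considerably more sophisticated.

```python
from itertools import product

log = [
    ["Register", "Assess", "Close case"],
    ["Register", "Close case"],
    ["Register", "Assess", "Assess", "Close case"],
]

def holds_response(trace, a, b):
    """True if every occurrence of a is eventually followed by b."""
    pending = False
    for act in trace:
        if act == a:
            pending = True
        if act == b:
            pending = False
    return not pending

activities = {act for trace in log for act in trace}

# Keep a candidate rule only if it holds in enough traces (threshold assumed).
for a, b in product(activities, repeat=2):
    if a == b:
        continue
    support = sum(holds_response(t, a, b) for t in log) / len(log)
    if support >= 0.8:
        print(f"response({a}, {b}) holds in {support:.0%} of traces")
```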
One advantage of declarative process mining is that it can be applied to a wider range of processes, including those that are highly dynamic and complex, where the specific sequence of activities may not be well-defined or may vary widely from case to case. Additionally, because declarative process mining focuses on discovering patterns and rules rather than a specific flow, it can be more easily applied to processes that involve human decision-making, such as knowledge worker processes.
DCR Indicators – explainable AI for business users
DCR Indicators is a cutting-edge, data-driven process mining solution that leverages the power of AI to provide valuable insights to organizations. What sets DCR Indicators apart from other solutions is that its recommendations are not only data-driven but also highly explainable.
For example, DCR Indicators can identify the most common reasons for bad outcomes, such as cases that are not closed by case workers. What makes this approach unique is that it not only provides recommendations based on a trained model, but it also explains how it arrived at those recommendations in a clear and understandable way. This level of explainability is critical for organizations that need to make informed decisions based on reliable data. Moreover, the model can be reviewed by domain experts who can accept or reject its elements if they are not fully understood. Additionally, experts can adjust the model and add new elements, potentially based on new legal requirements, ensuring the model stays up-to-date and relevant.
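I do not know the internals of DCR Indicators, but the basic idea of tying a rule to outcomes can be sketched in a few lines: compare how often a candidate rule is violated in bad cases versus good ones. Everything below, the traces, the labels, and the scoring, is a made-up illustration rather than the product's actual algorithm.

```python
def rule(trace):
    """Candidate rule: every case that is opened must eventually be closed."""
    return "Open" not in trace or "Close case" in trace

def violation_rate(traces):
    return sum(1 for t in traces if not rule(t)) / len(traces)

# Hypothetical cases labelled by outcome.
good = [["Open", "Follow up", "Close case"]]
bad = [["Open", "Follow up"], ["Open"]]

# A rule violated mostly in bad cases is a promising indicator.
score = violation_rate(bad) - violation_rate(good)
print(f"violated in {violation_rate(bad):.0%} of bad cases, "
      f"{violation_rate(good):.0%} of good cases (score {score:.2f})")
```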
We have had the opportunity to use DCR Indicators on various datasets, and we would like to share a few examples below to illustrate the benefits of this approach.
As a first example, consider the following image of a rule discovered by the process discovery algorithm:
The pattern is simple but powerful: whenever an event is received from another system, such as a notification that an employee is no longer sick, the case must be closed. This rule is easy to understand and explain, making it an ideal starting point for discussions with case workers and managers.
One of the key benefits of DCR Indicators is that the patterns it identifies are self-explanatory. This means that stakeholders can easily review and verify the patterns, ensuring that they are accurate and relevant to the organization's needs. This level of transparency and explainability is critical for organizations that need to make data-driven decisions with confidence.
Another example of how DCR Indicators can drive valuable insights is in the area of early follow-up for unemployed individuals in company training programs. The following image shows the rule that was discovered:
The rule may seem simple, but here, too, its impact is powerful. After enrolling in a company training program, following up within two weeks is critical to ensuring that individuals remain engaged and motivated. This approach also allows any issues to be addressed early on, preventing potential problems from snowballing into larger ones. The pattern is self-explanatory and easy to understand, making it an excellent starting point for further discussions with stakeholders. By leveraging DCR Indicators, organizations can gain valuable insights into their processes and make data-driven decisions that lead to better outcomes for all involved.
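Deadline-bounded rules like this one can be checked just as mechanically. Below is a minimal sketch that tests whether every enrolment is followed up within two weeks; the activity names, timestamps, and log format are assumptions for illustration.

```python
from datetime import datetime, timedelta

def deadline_met(events, trigger, response, deadline):
    """Every trigger event must be followed by a response within the deadline."""
    for act, ts in events:
        if act == trigger and not any(
            a == response and ts < t <= ts + deadline for a, t in events
        ):
            return False
    return True

# One hypothetical case: enrolment followed up after ten days.
case = [
    ("Enroll in company training", datetime(2023, 1, 10)),
    ("Follow up", datetime(2023, 1, 20)),
]

print(deadline_met(case, "Enroll in company training", "Follow up",
                   timedelta(weeks=2)))  # True
```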
While the pattern may seem straightforward, it is also important to understand the underlying reasons why this approach is effective. During our project, we explored various questions to gain a deeper understanding of the pattern and its implications. This level of exploration and inquiry is critical for organizations that want to leverage data-driven insights to inform their decision-making.
DCR Indicators provides not only valuable recommendations based on data-driven insights but also the transparency and explainability required to build trust with stakeholders. These two examples demonstrate how DCR Indicators can help organizations uncover patterns in their data and gain deeper insights into their processes, ultimately leading to better outcomes for all involved.
Sometimes, we come across patterns that are easy to understand formally but that humans still struggle to grasp intuitively. While explainability is important, it's not always sufficient to make us fully comprehend a pattern. This intuitive sense of "making sense" is equally essential.
One such example is a pattern we discovered requiring that courses and education programs be created before a company training program is created. This pattern is violated in 14 "bad" cases, though the violation also occurs in two good cases.
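Checking such a condition mechanically, and splitting its violations by outcome, is straightforward; here is a minimal sketch with made-up traces and outcome labels:

```python
def violates_condition(trace, prerequisite, activity):
    """True if activity occurs before any prerequisite has occurred."""
    seen = False
    for act in trace:
        if act == prerequisite:
            seen = True
        elif act == activity and not seen:
            return True
    return False

# Hypothetical cases labelled by outcome.
log = [
    (["Create course", "Create company training program"], "good"),
    (["Create company training program", "Create course"], "bad"),
    (["Create company training program"], "bad"),
]

counts = {"good": 0, "bad": 0}
for trace, outcome in log:
    if violates_condition(trace, "Create course",
                          "Create company training program"):
        counts[outcome] += 1
print(counts)  # {'good': 0, 'bad': 2}
```

The check itself is formally transparent, yet, as noted above, that alone does not guarantee the pattern will feel intuitive to domain experts.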