February 28, 2023


Declarative process mining is explainable AI

In today’s data-driven world, businesses have access to an unprecedented amount of information. Yet, the challenge remains how to make sense of this data and leverage it to drive better decision-making. Declarative process mining and explainable AI are two innovative approaches that can help address this challenge. However, many organizations are still not fully aware of their potential.

During the EcoKnow conference in August 2021, Tijs Slaats and Paul Cosma presented an insightful approach to process mining that caught my attention. As someone who has been working with process mining for a couple of years, I have seen the increasing use of AI as a solution that promises to provide intelligence and insights. However, the challenge with many AI recommendations is that they often cannot be easily explained.

According to Tijs Slaats, declarative process mining is, in fact, a form of explainable AI (XAI). The process begins by analyzing a log and using process discovery to generate a declarative model known as a DCR graph. The model is made up of simple elements such as roles, activities, and rules, which can be inspected and understood by humans.

Of particular importance for explainability are the rules within the declarative models. Each rule can be inspected individually and accepted or rejected based on its accuracy and relevance. This approach enables people to review and verify the models generated by the computer, thereby ensuring that humans can understand the underlying logic.
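As a rough illustration (not the actual DCR toolchain), such a model can be thought of as plain data: activities, roles, and rules that a human can inspect and accept or reject one at a time. The class names, rule kinds, and the sample case-handling model below are assumptions for illustration only:

```python
from dataclasses import dataclass, field

@dataclass
class Rule:
    kind: str               # e.g. "condition" or "response"
    source: str             # activity the rule refers to
    target: str             # activity the rule constrains
    accepted: bool = False  # a domain expert can accept or reject each rule

@dataclass
class DCRModel:
    roles: dict             # activity -> role performing it
    activities: list
    rules: list = field(default_factory=list)

# A tiny hypothetical case-handling model
model = DCRModel(
    roles={"Receive event": "External system", "Close case": "Case worker"},
    activities=["Receive event", "Close case"],
    rules=[
        # "Whenever an event is received, the case must eventually be closed"
        Rule(kind="response", source="Receive event", target="Close case"),
    ],
)

# Each rule can be reviewed individually by a domain expert
for rule in model.rules:
    print(f"{rule.kind}: {rule.source} -> {rule.target} (accepted: {rule.accepted})")
```

The point of the sketch is that every element is a named, inspectable piece of data rather than an opaque weight in a trained model.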

The verification process for declarative models goes beyond classical verification, which focuses solely on checking the result. Instead, it verifies the logical reasoning behind the answers, making it easier for humans to understand the models and the insights they provide. This level of transparency and explainability is critical for ensuring that organizations can make informed decisions based on reliable data.

Explainable AI is crucial for several reasons. One of the most significant is that many AI solutions are susceptible to bias. For example, suppose we ask an AI system like ChatGPT about a police officer who has two children with Peter. The system might start talking about same-sex marriage and adoption rather than concluding that the police officer is likely a woman. To address such biases, it’s essential to understand the rules and decision-making processes behind AI systems. Without explainability, it’s difficult to identify and correct bias, leading to potential harm or discrimination.


A note on process mining

Classical process mining aims to discover and analyze the sequence of activities that occur in a business process, typically represented by a process flowchart. This approach involves extracting event logs from various sources, such as IT systems or sensors, and using techniques such as process discovery, conformance checking, and performance analysis to identify bottlenecks and improve process efficiency.

In contrast, declarative process mining focuses on discovering patterns and rules that govern the behavior of a process. Rather than focusing on the specific sequence of activities, declarative process mining seeks to identify the constraints and conditions that define when a certain activity can or cannot occur, or must occur at a later stage, potentially within a certain deadline. This approach involves analyzing event logs to identify the various states and properties of the process, and using techniques such as process constraint discovery and constraint-based mining to uncover hidden patterns and dependencies.
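To make the idea of constraint discovery concrete, here is a minimal sketch of how one kind of declarative rule, a "response" rule ("if A occurs, B must occur afterwards"), could be scored against an event log. Real DCR discovery algorithms are considerably more sophisticated; the function name and the sample log are assumptions for illustration:

```python
def response_confidence(log, a, b):
    """Fraction of relevant traces satisfying 'if a occurs, b occurs afterwards'.

    log: list of traces, each a list of activity names in order of occurrence.
    A trace satisfies the rule if the last occurrence of `a` is eventually
    followed by an occurrence of `b`.
    """
    relevant = satisfied = 0
    for trace in log:
        if a not in trace:
            continue  # rule is vacuously true; count only relevant traces
        relevant += 1
        last_a = max(i for i, act in enumerate(trace) if act == a)
        if b in trace[last_a + 1:]:
            satisfied += 1
    return satisfied / relevant if relevant else 1.0

# Hypothetical event log: each trace is one case
log = [
    ["Open case", "Receive event", "Close case"],
    ["Open case", "Receive event", "Follow up", "Close case"],
    ["Open case", "Receive event"],  # violation: the case is never closed
]

print(response_confidence(log, "Receive event", "Close case"))  # 2 of 3 traces
```

A discovery algorithm would evaluate many candidate rules this way and keep those whose confidence is high enough, which is what makes each surviving rule individually inspectable.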

One advantage of declarative process mining is that it can be applied to a wider range of processes, including those that are highly dynamic and complex, where the specific sequence of activities may not be well-defined or may vary widely from case to case. Additionally, because declarative process mining focuses on discovering patterns and rules rather than a specific flow, it can be more easily applied to processes that involve human decision-making, such as knowledge worker processes.

DCR Indicators – explainable AI for business users

DCR Indicators is a cutting-edge, data-driven process mining solution that leverages the power of AI to provide valuable insights to organizations. What sets DCR Indicators apart from other solutions is that its recommendations are not only data-driven but also highly explainable.

For example, DCR Indicators can identify the most common reasons for bad outcomes, such as cases that are not closed by case workers. What makes this approach unique is that it not only provides recommendations based on a trained model, but it also explains how it arrived at those recommendations in a clear and understandable way. This level of explainability is critical for organizations that need to make informed decisions based on reliable data. Moreover, the model can be reviewed by domain experts who can accept or reject its elements if they are not fully understood. Additionally, experts can adjust the model and add new elements, potentially based on new legal requirements, ensuring the model stays up-to-date and relevant.

We have had the opportunity to use DCR Indicators on various datasets, and we would like to share a few examples below to illustrate the benefits of this approach.

As a first example, consider the following image of a rule discovered by the process discovery algorithm:

[Figure: the discovered rule, linking a received event to closing the case]

The pattern is simple but powerful. Whenever an event is received from another system, such as an employee who is no longer sick, the case must be closed. This rule is easy to understand and explain, making it an ideal starting point for discussions with case workers and managers.

One of the key benefits of DCR Indicators is that the patterns it identifies are self-explanatory. This means that stakeholders can easily review and verify the patterns, ensuring that they are accurate and relevant to the organization's needs. This level of transparency and explainability is critical for organizations that need to make data-driven decisions with confidence.

Another example of how DCR Indicators can drive valuable insights is in the area of early follow-up for unemployed individuals in company training programs. The following image shows the rule that was discovered:

[Figure: the discovered follow-up rule]

The rule may seem simple, but here too its impact is powerful. After enrolling in a company training program, following up within two weeks is critical to ensuring that individuals remain engaged and motivated. This approach also allows any issues to be addressed early on, preventing potential problems from snowballing into larger ones. The pattern is self-explanatory and easy to understand, making it an excellent starting point for further discussions with stakeholders. By leveraging DCR Indicators, organizations can gain valuable insights into their processes and make data-driven decisions that lead to better outcomes for all involved.
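A rule like "follow up within two weeks of enrollment" is a timed response constraint. As a hedged sketch (the activity names and timestamps below are invented for illustration), checking one case against such a deadline could look like this:

```python
from datetime import datetime, timedelta

def followed_up_in_time(trace, start="Enroll in training", follow="Follow up",
                        deadline=timedelta(days=14)):
    """Check one case: was every `start` event followed up within `deadline`?

    trace: list of (activity, timestamp) pairs, ordered by time.
    """
    for i, (act, ts) in enumerate(trace):
        if act != start:
            continue
        # look for a follow-up after this enrollment, inside the deadline
        if not any(a == follow and ts < t <= ts + deadline
                   for a, t in trace[i + 1:]):
            return False
    return True

# Hypothetical case: enrollment on March 1st, follow-up ten days later
trace = [
    ("Enroll in training", datetime(2023, 3, 1)),
    ("Follow up", datetime(2023, 3, 11)),
]
print(followed_up_in_time(trace))  # True: within two weeks
```

Because the deadline is an explicit parameter rather than a learned weight, a case worker can read the rule directly and debate whether two weeks is the right threshold.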

While the pattern may seem straightforward, it is also important to understand the underlying reasons why this approach is effective. During our project, we explored various questions to gain a deeper understanding of the pattern and its implications. This level of exploration and inquiry is critical for organizations that want to leverage data-driven insights to inform their decision-making.

DCR Indicators provides not only valuable recommendations based on data-driven insights but also the transparency and explainability required to build trust with stakeholders. These two examples demonstrate how DCR Indicators can help organizations uncover patterns in their data and gain deeper insights into their processes, ultimately leading to better outcomes for all involved.

Sometimes, we come across patterns that are easy to understand formally, but humans still struggle to grasp intuitively. While explainability is important, it's not always sufficient to make us fully comprehend a pattern. This intuitive sense of "making sense" is equally essential.

One such example is a pattern we discovered requiring that courses and education programs be created before a company training program is created. This pattern is violated in 14 “bad” cases, even though it also occurs in two “good” cases.

[Figure: the discovered precedence rule, with violating and conforming cases]

This empowers the domain expert to evaluate the process execution using real-life examples, allowing for a more comprehensive understanding and analysis of the process.
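The precedence pattern above can be sketched as a simple check over labelled cases, counting how often the rule is broken in "good" versus "bad" outcomes. The activity names and the tiny labelled log are assumptions for illustration, not the municipality's actual data:

```python
def violates_precedence(trace, first, then):
    """True if `then` occurs before any occurrence of `first` in the trace."""
    for act in trace:
        if act == then:
            return True   # `then` happened before `first` was seen
        if act == first:
            return False
    return False          # `then` never occurred, so the rule is not violated

# Hypothetical labelled log: (trace, outcome)
log = [
    (["Create course", "Create training program"], "good"),
    (["Create training program", "Create course"], "bad"),   # program first
    (["Create course", "Create training program"], "bad"),
]

counts = {"good": 0, "bad": 0}
for trace, outcome in log:
    if violates_precedence(trace, "Create course", "Create training program"):
        counts[outcome] += 1
print(counts)  # violations split by case outcome
```

Splitting violation counts by outcome is what lets a domain expert see, on real-life examples, whether breaking the rule actually correlates with bad cases.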

It’s crucial to acknowledge that humans often question the “why” behind certain patterns, and this is something that we may not want to automate. Asking questions and being critical is an essential part of our work, and we should encourage it so that we continue to grow and learn.

Human adjustable explainable AI

Domain experts, who are the case workers helping citizens and customers, play a crucial role in verifying and adjusting the models, as they are the true experts in their respective domains. The models discovered through process mining should be designed to be explainable, human-verifiable, and adjustable by domain experts. By providing such models, domain experts can work with them as they are, without relying solely on data scientists. This not only improves the accuracy of the models but also ensures that they align with the actual work processes and legal requirements of the organization.

[Figure: a DCR graph model presented graphically for expert review]

By presenting these models graphically, domain experts are empowered to verify and adjust them, which ultimately enables them to take ownership of their work. This is a crucial aspect of declarative process mining because it ensures that the experts who are closest to the work are able to provide valuable insights and make informed decisions. By allowing domain experts to work with a model that is explainable, human-verifiable, and adjustable, they can more easily understand and make use of the AI recommendations in their daily work. Additionally, by providing a clear picture of the business processes and their associated rules, the experts can identify opportunities for improvement and optimize the processes to better serve their customers and citizens. This empowerment of domain experts is essential to ensure that AI-driven solutions are effective and impactful in real-world scenarios.

Limitations of declarative process mining

Declarative process mining is a valuable technique for learning control-flow models based on event logs, but it has limitations when it comes to decision-making. One of its main drawbacks is that it only considers the control-flow aspect of processes and does not take into account other crucial aspects, such as data or resource utilization. This can limit the scope of the analysis and make it difficult to capture the full complexity of real-world processes. To address these limitations, further research is needed to develop discovery algorithms that consider all aspects involved in decision-making, as well as to capture event logs with more information such as users, resources, and their utilization at the time of execution. By doing so, declarative process mining can serve as a useful foundation for decision support, enabling organizations to make more informed decisions based on a comprehensive understanding of their processes.


Declarative process mining offers a powerful approach to explainable AI that enables us to explain the elements of the models - roles, activities, and rules - in a way that makes sense to businesspeople and domain experts. This understandability and explainability is critical because it ensures that the AI recommendations are usable and actionable by those who will benefit from them most in their daily work. It is important that domain experts can easily inspect and understand the model, not just data scientists. After all, these are the individuals who will be using the recommendations generated by the AI. If they do not feel confident in the solution, they may be hesitant to use it, rendering the AI useless. Furthermore, we must ensure that the solutions generated by the AI do not violate any legal requirements or provide wrongful recommendations. Humans have an intuitive sense of what is right and wrong, and we should leverage this capability to provide meaningful, trustworthy recommendations. Ultimately, it is our goal to ensure that the recommendations generated by the AI are not only explainable but also align with the judgment of domain experts, ensuring the success and applicability of the solution.

Declarative process mining is a powerful tool that can help improve knowledge workers' productivity and efficiency. By providing explainable patterns that make sense to domain experts, we can encourage them to change their work behavior in a way that is fully compliant with legal requirements. In addition, regular meetings among employees, where they share experience and ask for recommendations from co-workers, can be enhanced by incorporating the indicators provided by the AI solution. This approach has already been successfully implemented in a Danish municipality, where the indicator of early follow-up for employees attending company training programs was adopted as a new work habit.

In order to further facilitate knowledge workers, IT systems should support dynamic and declarative workflows such as #DCR graphs. By doing so, the burden on employees of remembering every detail of laws, rules, and advice from professional sparring can be significantly reduced. This would allow them to focus on their tasks without having to worry about compliance, leading to less stress and greater motivation. Ultimately, it is important to recognize the unique ability of humans to sense intuitively what is right or wrong, and to leverage this capability in conjunction with the power of AI. By doing so, we can achieve greater efficiency, productivity, and compliance in the workplace.