Our mission is to create and spread knowledge on how IT creates value for industry and society, and to act as a catalyst for realizing IT's potential. To this end, we actively conduct research in four key areas: Explainable AI, Process Analytics in Healthcare, Digital Auditing, and Behavioral Analytics.
2019 - 2023
An auditor, who assures company stakeholders that the company's financial figures represent reality fairly, deals daily with analyzing information. Before the digital era, this information was sparse and paper-based, and this low-information environment shaped the international guidelines on how to conduct an audit. Nowadays, far more information is stored electronically and can be examined by means of data analysis. The potential advantages of this new approach for the auditing profession have been enumerated in various articles. However, much research that integrates both auditing concerns and data-analysis concerns still has to be conducted. This research first presents a full overview of the audit phases that could benefit from data analysis, along with a concrete proposal of analysis techniques that could be used. To avoid staying hypothetical, both the real input data that is generally available during audit engagements and the technicalities of the analysis techniques are examined for potential combinations. In the following phases of the project, the specific auditing tasks of identifying deviations and of responding to errors are investigated from a process mining and data mining point of view. The goal is to leverage the full-population testing opportunity for auditors, while also supporting the auditor who faces an increased number of potential flags to investigate.
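As one illustration of what full-population testing can look like (a sketch of a well-known audit analytic, not this project's specific method), a Benford's-law first-digit test can screen every transaction amount at once and flag leading digits whose observed frequency deviates from the expected logarithmic distribution:

```python
from collections import Counter
from math import log10

def benford_deviations(amounts, threshold=0.02):
    """Flag leading digits whose observed frequency deviates from the
    Benford expectation log10(1 + 1/d) by more than `threshold`."""
    digits = [int(str(abs(a)).lstrip("0.")[0]) for a in amounts if a]
    total = len(digits)
    counts = Counter(digits)
    flags = {}
    for d in range(1, 10):
        expected = log10(1 + 1 / d)
        observed = counts.get(d, 0) / total
        if abs(observed - expected) > threshold:
            flags[d] = (observed, expected)
    return flags
```

Each flagged digit is then a starting point for investigation rather than a conclusion, which is exactly the situation described above: more potential flags than a manual sample would ever produce.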
Hospitals are becoming increasingly aware of the need to improve their business processes to tackle challenges such as tightening budgets and an ageing population. To focus process improvement initiatives, challenging questions such as "why is the flow time so high for a group of patients?" need to be answered. Root-cause analysis can be used to answer such questions, as it aims to find explanations for problems, e.g., related to flow time. To find these explanations, data-driven root-cause analysis is promising, as it enables studying a large number of patients using readily available data. This data originates from the hospital information system, which automatically records process execution information in event logs. Unfortunately, the current state of the art in data-driven root-cause analysis fails to reach its full potential because it suffers from two fundamental limitations: (1) the presence of data quality issues in real-life event logs and (2) gaps in existing approaches to data-driven root-cause analysis. The proposed research aims to tackle both limitations by (1) introducing a methodology to improve existing event logs using indoor location data and (2) introducing an enhanced methodology to support data-driven root-cause analysis. This requires overcoming several research challenges, which will lead to innovative results and fundamental contributions to the literature.
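The data-driven starting point can be sketched minimally (assuming a flat event log of `{case, timestamp}` records and a separate table of case attributes; the schema and attribute names are illustrative, not the project's actual data model): compute per-case flow times, then break them down by a candidate root-cause attribute.

```python
from datetime import datetime
from statistics import mean

def flow_times(event_log):
    """Per-case flow time in hours: last event minus first event,
    from a flat list of {case, timestamp} dicts (assumed schema)."""
    spans = {}
    for e in event_log:
        ts = e["timestamp"]
        lo, hi = spans.get(e["case"], (ts, ts))
        spans[e["case"]] = (min(lo, ts), max(hi, ts))
    return {c: (hi - lo).total_seconds() / 3600 for c, (lo, hi) in spans.items()}

def compare_groups(flow, case_attrs, attribute):
    """Mean flow time per value of a case attribute: a first, simple
    step toward explaining why one patient group is slower."""
    groups = {}
    for case, hours in flow.items():
        groups.setdefault(case_attrs[case][attribute], []).append(hours)
    return {value: mean(hs) for value, hs in groups.items()}
```

A large gap between group means suggests the attribute is worth investigating as a root cause, though the data quality issues mentioned above (missing or coarse timestamps) directly distort exactly these numbers.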
2017 - 2021
Process mining is a relatively young research discipline, with its origin at the end of the last century. The newborn discipline was picked up by Wil van der Aalst (Eindhoven University of Technology) in 2004 and elaborated into the broad research domain of process mining as we know it today. The idea of process mining is to discover, monitor, and improve real processes (as opposed to assumed processes) by extracting knowledge from the event logs of today's information systems, as declared in the 'Process Mining Manifesto'. By connecting data mining and business process modeling, process mining has the potential to fundamentally increase our knowledge of business processes, based on real process behavior. To date, an active group of researchers in the Business Process Management community works on this topic, both on its fundamentals and in a broad range of application fields. Industry is also showing interest in the research, as evidenced by multiple reported case studies.
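A tiny sketch of the core abstraction behind many discovery algorithms (the activity names are made up for illustration): counting directly-follows relations, i.e., how often activity a is immediately followed by activity b across the traces of an event log.

```python
from collections import Counter

def directly_follows(traces):
    """Count directly-follows relations (a, b) over a list of traces,
    where each trace is the ordered activity sequence of one case."""
    dfg = Counter()
    for trace in traces:
        for a, b in zip(trace, trace[1:]):
            dfg[(a, b)] += 1
    return dfg
```

The resulting directly-follows graph is the input from which discovery algorithms derive a process model of the real, observed behavior.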
2017 - 2020
Since their inception in the late 1980s by Bart Kosko, Fuzzy Cognitive Maps (FCMs) have been widely accepted by the scientific community. However, most of the research reported in the literature is applicative, with only a few papers devoted to elaborating the FCM foundations. Recently, researchers from Hasselt University proposed a new approach named Short-term Cognitive Networks (STCNs) as an alternative to classic FCMs, which allows performing simulations on the basis of previously defined expert knowledge, where weights may or may not have a causal meaning. The accuracy and transparency of this model encouraged us to investigate a new approach that allows the STCN-based model to handle symbolic situations, since real-world problems are often described with imprecise information that is difficult to evaluate objectively. The proposal focuses on four main challenges: 1) how to handle symbolic information attached to the concepts' activation values and the relations between them, 2) how to unify the information when it comes from different experts, 3) how to estimate the weight set from data when experts are not available, and 4) how to characterize the inference process for a problem instance. In the end, we aim to obtain a symbolic neural system that is more transparent to human experts and can be used in a wide variety of application problems.
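The FCM reasoning scheme that STCNs build on can be sketched as follows (a minimal illustration of one common update rule, with self-memory and a sigmoid transfer function; it does not reproduce STCN specifics such as short-term weights):

```python
import math

def fcm_simulate(weights, state, iterations=20):
    """Iterative FCM-style reasoning: each concept's next activation is a
    sigmoid of its own current value plus the weighted incoming activations.
    `weights[i][j]` is the influence of concept i on concept j."""
    n = len(state)
    for _ in range(iterations):
        nxt = []
        for j in range(n):
            raw = state[j] + sum(weights[i][j] * state[i] for i in range(n))
            nxt.append(1 / (1 + math.exp(-raw)))  # squash into (0, 1)
        state = nxt
    return state
```

The simulation either converges to a fixed point, cycles, or behaves chaotically; the symbolic extension proposed above concerns how the activation values and weights feeding this loop are expressed when experts can only provide imprecise, linguistic information.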
2017 - 2020
A community is a group of vertices that share common properties and/or similar roles within a graph. Community Detection (CD) is the process of discovering well-defined communities based on the principle that there are more edges inside a community than edges connecting it to the rest of the graph. This problem is quite challenging and remains an active research field, recently expanded to Multiplex Networks, which incorporate several channels of connectivity in a system and provide a natural description for systems in which entities have a different set of neighbors at each layer. This proposal attempts to develop new models to detect communities in Multiplex Networks, thus addressing the key shortcomings of existing models. The proposal comprises two research directions: 1) improving the Girvan-Newman algorithm by handling the issues with the betweenness measure and then proposing an aggregation operator capable of reducing the information loss, and 2) proposing a new algorithm based on tensor algebra and rough sets to analyze the multiplex network as a single n-dimensional space. In both cases, we will consider entities having different natures, particularly those represented as sets. It should be noted that the proposed algorithms must be able to cover both monoplex and multiplex CD problems, as existing approaches to both suffer from the same shortcomings. The theoretical results of this research will be applied to a real-world problem related to email marketing.
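The first direction can be illustrated with a bare-bones Girvan-Newman step on a monoplex graph (a deliberate simplification: betweenness here follows a single BFS shortest path per node pair instead of averaging over all shortest paths, which is one of the issues with the measure mentioned above):

```python
from collections import deque

def edge_betweenness(adj):
    """Approximate edge betweenness: for every ordered node pair, count
    the edges on one BFS shortest path between them."""
    scores = {}
    for src in adj:
        parents = {src: None}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in parents:
                    parents[v] = u
                    queue.append(v)
        for dst in parents:  # walk each shortest-path tree branch back to src
            node = dst
            while parents[node] is not None:
                edge = frozenset((node, parents[node]))
                scores[edge] = scores.get(edge, 0) + 1
                node = parents[node]
    return scores

def girvan_newman_step(adj):
    """One Girvan-Newman iteration: remove the highest-betweenness edge,
    pushing the graph apart along community boundaries."""
    scores = edge_betweenness(adj)
    u, v = max(scores, key=scores.get)
    adj[u].discard(v)
    adj[v].discard(u)
    return u, v
```

On two triangles joined by a bridge, the bridge carries every cross-community shortest path, so the first removed edge is the bridge and the two communities separate.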
2017 - 2020
Rough Cognitive Networks (RCNs) are a recently introduced classification model that allows elucidating its decision process. This transparent classifier augments the neural reasoning scheme of Fuzzy Cognitive Maps with information granules coming from Rough Set Theory. RCN-based classifiers have proven effective in solving a wide variety of standard classification problems. The accuracy and transparency of RCN-based models encouraged us to investigate their performance in Multi-Label Classification (MLC) scenarios, which have arisen as an extension of standard classification problems in which each input object is associated with multiple labels. The RCN model involves three key steps, namely (1) the granulation of the example space, (2) the network design, and (3) the network exploitation. To adapt the RCN model to the MLC context, these steps must be modified. The envisaged research will result in an RCN-based algorithm able to solve different types of MLC problems that is theoretically sound and transparent, yields high prediction rates, and is computationally more efficient than existing MLC procedures.
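The MLC setting itself can be illustrated with the simplest problem transformation, binary relevance: train one independent binary classifier per label (shown here with a toy 1-nearest-neighbour base learner; this is a common baseline for comparison, not the RCN approach).

```python
def train_binary(X, y):
    """Toy base learner: 1-nearest-neighbour on squared distance."""
    def predict(x):
        best = min(range(len(X)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(X[i], x)))
        return y[best]
    return predict

def binary_relevance_fit(X, Y, base_learner):
    """Binary relevance: one independent classifier per label column of Y."""
    n_labels = len(Y[0])
    return [base_learner(X, [row[l] for row in Y]) for l in range(n_labels)]

def binary_relevance_predict(models, x):
    """Predict the full label vector by querying each per-label model."""
    return [m(x) for m in models]
```

Binary relevance ignores correlations between labels, which is precisely the kind of limitation a granulation-based model such as an MLC-adapted RCN would aim to address.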