Integrating Explainable AI into Model-Driven and Low-Code Enterprise Applications

Authors

  • Dominik Banka, Department of Data Science and Data Engineering, Eötvös Loránd University, Pázmány Péter sétány 1/C, 1117 Budapest, Hungary
  • Tamas Orosz, Department of Data Science and Data Engineering, Eötvös Loránd University, Pázmány Péter sétány 1/C, 1117 Budapest, Hungary, https://orcid.org/0000-0003-0595-6522
  • Attila Ritzl, SAP Hungary Kft., Záhony u. 7., 1031 Budapest, Hungary, https://orcid.org/0009-0007-2742-4881

DOI:

https://doi.org/10.14513/actatechjaur.00929

Keywords:

Explainability, Low-code, Model-driven, Integration, Transparency, Enterprise applications

Abstract

Explainable artificial intelligence has become increasingly important in enterprise settings, as organisations require transparent and trustworthy decision-support tools. At the same time, low-code and model-driven platforms are widely adopted for building business applications, yet their high level of abstraction often hides the reasoning behind automated recommendations. This study examines how explainability can be systematically incorporated into such environments by introducing a modular approach that separates predictive functions from the generation of human-interpretable explanations. The proposed concept builds on an external reasoning layer that provides both predictive outputs and concise, user-oriented justifications through a unified interface, allowing enterprise systems to present explanations without modifying existing development workflows. To demonstrate the feasibility of the approach, the study applies it to a representative enterprise scenario involving personalised recommendations. The proof-of-concept implementation shows that explanations can be delivered in real time and integrated seamlessly into standard business user interfaces. The results highlight that the proposed solution can enhance transparency, support user trust, and increase the adoption of data-driven features in low-code and model-driven applications. The study contributes a practical architectural pattern that can serve as a foundation for future explainable enterprise systems and provides initial evidence that explanation services can operate effectively alongside contemporary development paradigms.
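To make the architectural pattern concrete, the sketch below illustrates one possible shape of the unified interface the abstract describes: a single call to an external reasoning layer that returns a prediction together with a concise, business-user-facing justification. This is a minimal illustrative sketch in Python, not the authors' implementation; all names (ExplainedRecommendation, recommend_with_explanation, the placeholder score and factors) are hypothetical.

```python
# Hypothetical sketch of a "unified interface": the reasoning layer
# returns the predictive output and a short, user-oriented explanation
# in one response object. All names and values are illustrative.
from dataclasses import dataclass, field


@dataclass
class ExplainedRecommendation:
    item_id: str          # recommended product or action
    score: float          # model confidence in [0, 1]
    explanation: str      # concise text shown to the business user
    top_factors: list = field(default_factory=list)  # e.g. feature attributions


def recommend_with_explanation(customer_id: str) -> ExplainedRecommendation:
    """Single call a low-code front end can bind to a UI widget.

    Both the predictive model and the explanation generator sit behind
    this one response, so the application workflow stays unchanged.
    """
    # Placeholder logic standing in for the external reasoning layer.
    return ExplainedRecommendation(
        item_id="PRODUCT-42",
        score=0.87,
        explanation=(
            "Recommended because customers with a similar purchase "
            "history bought this item, and it matches recent searches."
        ),
        top_factors=[
            ("purchase_history_similarity", 0.54),
            ("recent_search_match", 0.33),
        ],
    )


if __name__ == "__main__":
    result = recommend_with_explanation("CUST-1001")
    print(result.item_id, result.score)
    print(result.explanation)
```

Because the front end binds to a single response object, explanations can be surfaced in standard UI widgets alongside the prediction itself, which is the integration property the study emphasises for low-code and model-driven workflows.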

Published

2026-02-05

How to Cite

Banka, D., Orosz, T., & Ritzl, A. (2026). Integrating Explainable AI into Model-Driven and Low-Code Enterprise Applications. Acta Technica Jaurinensis. https://doi.org/10.14513/actatechjaur.00929

Section

Research articles