
Topic: Explainable AI Engineering Applications

A special issue of Complex Engineering Systems

ISSN 2770-6249 (Online)

Submission deadline: 30 Nov 2022

Guest Editor(s)

  • Dr. Tim Arnett
    Thales Aerospace, Cincinnati, OH, USA.

  • Dr. Kelly Cohen

    Department of Aerospace Engineering & Engineering Mechanics, University of Cincinnati, Cincinnati, OH, USA.

  • Dr. Barnabas Bede

    Department of Mathematics, DigiPen Institute of Technology, Redmond, WA, USA.

  • Dr. Javier Viaña Pérez

    Department of Aerospace Engineering and Engineering Mechanics, University of Cincinnati, Cincinnati, OH, USA.

    Department of Physics, Massachusetts Institute of Technology, Cambridge, MA, USA.

Special Issue Introduction

The ability to explain the reasoning of an algorithm is critical. This property of AI architectures is known as explainability; without it, the outputs generated by a model cannot be justified. Currently, most industrial engineering systems rely on opaque, unexplainable algorithms, so a large share of the data-driven decisions made in the world lack justification. Not only for certification and reliability, but also for supervision, it is necessary to change this paradigm and encourage the use of transparent AI. Research on explainability follows two main streams. The first retrofits existing opaque algorithms with modifications that make them somewhat more interpretable. The second proposes novel algorithms whose core mathematical operations are inherently easy to understand from a human perspective, while simultaneously achieving high performance. Although the second approach is more challenging, it offers a higher long-term return, as it pushes the boundary of what is known.

In this Special Issue, we aim to cover a selection of theoretical and experimental research that advances the development of explainable and transparent AI algorithms. Topics include, but are not limited to, the following:
● Fuzzy logic
● Genetic fuzzy systems
● Bio-inspired evolutionary optimization algorithms
● Soft computing
● Multi-agent systems
● Intelligent controls
● Gradient-based interpretable architectures
● Neuro-fuzzy hybridization
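To illustrate the second stream described above — models that are interpretable by construction — here is a minimal sketch of a fuzzy rule base in the spirit of the fuzzy logic topics listed. The variable names, membership functions, and rule consequents are illustrative assumptions, not part of this call for papers:

```python
def tri(x, a, b, c):
    """Triangular membership function: rises from a, peaks at b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def infer_fan_speed(temp_c):
    """Zeroth-order Takagi-Sugeno inference with two human-readable rules.

    Rule 1: IF temperature is WARM THEN fan speed is 40
    Rule 2: IF temperature is HOT  THEN fan speed is 90

    The output is the firing-strength-weighted average of the rule
    consequents, so every prediction traces back to explicit rules.
    """
    warm = tri(temp_c, 15, 25, 35)   # degree to which temp is "warm"
    hot = tri(temp_c, 25, 40, 55)    # degree to which temp is "hot"
    total = warm + hot
    if total == 0.0:
        return 0.0                   # no rule fires
    return (warm * 40 + hot * 90) / total
```

Unlike an opaque model, the decision logic here can be audited rule by rule, which is the property that certification and supervision demand.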

Submission Deadline

30 Nov 2022

Submission Information

Articles in this Special Issue are exempt from article processing charges.
For Author Instructions, please refer to the journal's website.
For Online Submission, please log in at the journal's submission system.
Submission Deadline: 30 Nov 2022
Contacts: Wen Zhang, Assistant Editor

Published Articles

This special issue is now open for submission.
© 2016-2022 OAE Publishing Inc., except certain content provided by third parties