Guest Editor(s)
- Dr. Tim Arnett
  Thales Aerospace, Cincinnati, OH, USA.
- Dr. Kelly Cohen
  Department of Aerospace Engineering & Engineering Mechanics, University of Cincinnati, Cincinnati, OH, USA.
- Dr. Barnabas Bede
  Department of Mathematics, DigiPen Institute of Technology, Redmond, WA, USA.
- Dr. Javier Viaña Pérez
  Department of Aerospace Engineering & Engineering Mechanics, University of Cincinnati, Cincinnati, OH, USA.
  Department of Physics, Massachusetts Institute of Technology, Cambridge, MA, USA.
Special Issue Introduction
The ability to explain the reasoning behind an algorithm's output is critical. This property of AI architectures is known as explainability. Without it, the outputs generated by a model cannot be justified or trusted. Currently, most industrial engineering systems rely on opaque, unexplainable algorithms, so a large share of the data-driven decisions made in the world come with no justification. For reasons of certification and reliability, but also of oversight, it is necessary to change this paradigm and encourage the use of transparent AI. Two main research streams address explainability. The first modifies algorithms that are already opaque so that, with these modifications, they become somewhat more interpretable. The second proposes novel algorithms whose core mathematical operations are easy to understand from a human perspective while simultaneously achieving high performance. Although the second approach is more difficult, it offers a higher long-term return, as it pushes the boundary of what is known.
In this Special Issue, we aim to cover a selection of theoretical and experimental research that advances the development of explainable and transparent AI algorithms. Topics of interest include, but are not limited to, the following:
● Fuzzy logic
● Genetic fuzzy systems
● Bio-inspired evolutionary optimization algorithms
● Soft computing
● Multi-agent systems
● Intelligent controls
● Gradient-based interpretable architectures
● Neuro-fuzzy hybridization
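To illustrate the kind of transparency this Special Issue is concerned with, the sketch below shows a minimal Takagi-Sugeno fuzzy inference system in which every step of the reasoning is inspectable by a human. The variable names, membership functions, and rule base are illustrative assumptions only, not drawn from any submission or specific system.

```python
# Illustrative sketch of an interpretable fuzzy controller.
# All membership functions and rules are hypothetical examples.

def tri(x, a, b, c):
    """Triangular membership function rising on [a, b], falling on [b, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fan_speed(temp_c):
    """Takagi-Sugeno inference: fuzzify, fire rules, weighted average.

    Each rule reads as a plain-language statement, e.g.
    "IF temperature is warm THEN fan speed is 40%".
    """
    # Fuzzification: degree to which the input matches each linguistic term.
    cold = tri(temp_c, -10.0, 0.0, 15.0)
    warm = tri(temp_c, 10.0, 22.0, 30.0)
    hot = tri(temp_c, 25.0, 40.0, 60.0)
    # Rule consequents (percent fan speed): cold -> 0, warm -> 40, hot -> 100.
    num = cold * 0.0 + warm * 40.0 + hot * 100.0
    den = cold + warm + hot
    return num / den if den > 0 else 0.0

print(fan_speed(22.0))  # fully "warm", so the output is exactly 40.0
```

Because the rule base is explicit, an engineer can trace any output back to the rules that produced it, which is precisely the property that opaque models lack.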
Submission Deadline
10 Dec 2022