Recurrent Neural Networks (RNNs) are a class of Artificial Neural Networks that support backward connections and may form cycles in the underlying graph. Despite their relative success, existing learning algorithms do not consider stability issues as part of their scheme; they only address aspects related to the final precision of the inference process. The stability problem in RNNs remains an obstacle for mathematicians, and the absence of an exact solution evidences the need for alternative approaches inspired by approximate techniques. Moreover, a key aspect of some RNNs, such as Fuzzy Cognitive Maps (FCMs), is their white-box behavior, since humans can intuitively follow the reasoning process. Our proposal aims at reducing vagueness during network construction, given that decision makers express their knowledge in different ways. We also want to improve the network's performance by exploiting findings about FCM behavior and by designing a new backpropagation algorithm for the learning phase. Finally, we will propose a framework that is interpretable in its model, inference process, and decisions. By accomplishing these objectives, our goal is to design, build, and analyze decision models using Recurrent Neural Networks under the premises of Explainable Artificial Intelligence.
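To make the white-box reasoning process concrete, the following is a minimal sketch of standard FCM inference, where each concept's activation is repeatedly updated from the weighted influence of the others through a sigmoid transfer function. The weight matrix, slope, and iteration count below are hypothetical illustrations, not values from this proposal.

```python
import numpy as np

def sigmoid(x, slope=1.0):
    """Standard sigmoid transfer function used in many FCM formulations."""
    return 1.0 / (1.0 + np.exp(-slope * x))

def fcm_infer(weights, activations, steps=10):
    """Iterate the common FCM update rule A(t+1) = f(W^T A(t)).

    weights[i, j] is the causal weight from concept i to concept j;
    activations is the initial state vector of the concepts.
    """
    a = np.asarray(activations, dtype=float)
    for _ in range(steps):
        a = sigmoid(weights.T @ a)
    return a

# Hypothetical 3-concept map for illustration only.
W = np.array([[0.0, 0.5, -0.4],
              [0.0, 0.0, 0.7],
              [0.2, 0.0, 0.0]])
final_state = fcm_infer(W, [1.0, 0.0, 0.0])
print(final_state)
```

Because every update is a weighted sum of named concepts, each intermediate state can be inspected and traced back to specific causal weights, which is what makes the reasoning process intuitively followable.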