Cognitive spaces – Next generation explainability
Can artificial intelligence algorithms learn to communicate in a language we understand?
Machine learning algorithms are often perceived as complex black boxes, and much research has already gone into opening the black box to explain what has been learned from data. The communication aspects of explainable AI have attracted less attention. The Cognitive Spaces project aims to relate AI explanations better to given user groups, effectively letting the algorithms speak the user's language. We will realize this vision by aligning learned representations of data with formal human knowledge graphs. Through theoretical and experimental analysis, we hope to understand and push the limits of deep learning interactivity, and to design new learning schemes that enable knowledge-aware models and explanations.
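One way to picture "aligning learned representations with knowledge graphs" is as a mapping between two embedding spaces. The sketch below is purely illustrative (it is not the project's actual method): it uses orthogonal Procrustes alignment on toy data to map a model's internal representation into a hypothetical concept space, where it can be labelled with its nearest human concept.

```python
import numpy as np

# Illustrative sketch, all data is synthetic: align a model's learned embedding
# space with a knowledge-graph concept space via orthogonal Procrustes, so that
# model representations can be described in terms of human concepts.

rng = np.random.default_rng(0)

# Toy "learned" embeddings for 5 anchor concepts (model space, dim 4).
X = rng.normal(size=(5, 4))

# Toy knowledge-graph embeddings of the same 5 concepts (concept space).
# Constructed here as a rotation of X plus noise, so alignment is possible.
Q_true, _ = np.linalg.qr(rng.normal(size=(4, 4)))
Y = X @ Q_true + 0.01 * rng.normal(size=(5, 4))

# Orthogonal Procrustes: find the rotation W minimizing ||X W - Y||_F.
U, _, Vt = np.linalg.svd(X.T @ Y)
W = U @ Vt

# A new model representation can now be mapped into the concept space and
# labelled with its nearest knowledge-graph concept.
z = X[2] + 0.005 * rng.normal(size=4)   # a representation close to concept 2
z_aligned = z @ W
nearest = int(np.argmin(np.linalg.norm(Y - z_aligned, axis=1)))
print(nearest)  # → 2
```

In a realistic setting, the anchor concepts would come from the knowledge graph and the learned embeddings from the trained model; the alignment then lets explanations be phrased in the vocabulary of the graph rather than in raw feature space.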
Our primary use case concerns cognitive spaces for a deeper understanding of electric brainwaves measured by electroencephalography (EEG). These signals are of increasing diagnostic importance and play a fundamental role in neuroscience. In an ambitious attempt to better understand EEG models, we will use cognitive space methods for real-time "captioning" of the brainwave signal.
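To make the idea of real-time captioning concrete, here is a deliberately simplified sketch (not the project's method, and all signal data, labels, and parameters are invented): a sliding window moves over a synthetic signal, a basic band-power feature is extracted per window, and each window is "captioned" with the dominant frequency band.

```python
import numpy as np

# Purely illustrative: "caption" a streaming signal by sliding a window over
# it, extracting a simple spectral feature, and emitting a label per window.

def band_power(window, fs, lo, hi):
    """Mean power of the window in the [lo, hi) Hz band."""
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    power = np.abs(np.fft.rfft(window)) ** 2
    mask = (freqs >= lo) & (freqs < hi)
    return power[mask].mean()

fs = 128  # sampling rate in Hz (assumed)
t = np.arange(0, 4, 1.0 / fs)
# Synthetic "EEG": 10 Hz (alpha-like) for 2 s, then 20 Hz (beta-like) for 2 s.
signal = np.where(t < 2, np.sin(2 * np.pi * 10 * t), np.sin(2 * np.pi * 20 * t))

captions = []
win = fs  # one-second windows
for start in range(0, len(signal) - win + 1, win):
    w = signal[start:start + win]
    alpha = band_power(w, fs, 8, 13)
    beta = band_power(w, fs, 13, 30)
    captions.append("alpha-dominant" if alpha > beta else "beta-dominant")

print(captions)
# → ['alpha-dominant', 'alpha-dominant', 'beta-dominant', 'beta-dominant']
```

A real captioning system would replace the hand-crafted feature and two-word vocabulary with learned representations aligned to a knowledge graph, but the loop structure, windowed signal in, human-readable label out, is the same.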