
The integration of AI agents within the space workforce: Say hello to HAL 2.0!

by Audrey Berquand

In this article, we debunk some AI myths, discuss what space virtual assistants look like in 2020 and where the research is currently heading, and address the limitations of this emerging technology.

Artificial Intelligence (AI) fuels our imagination, stimulating either fear (think terrifying Ava from Ex Machina) or wonder (think friendly robot Sonny from I, Robot). Some of you already interact with an AI-based virtual assistant on a daily basis, whether you call it Siri, Alexa, Google Assistant, or Cortana. Although your virtual assistant can handle your hairdresser appointment perfectly, it would be of little help in designing the next Mars rover. For that, you would need a domain-specific virtual assistant that understands space systems concepts.

Ava from the movie Ex Machina and Sonny from I, Robot: two visions of how AI might evolve. Credit: Universal Pictures, Twentieth Century Fox Film Corporation.

Myths surrounding AI are often linked to the lack of distinction between ‘strong’ and ‘weak’ AI. Strong AI refers to AI exhibiting human-level intelligence, able to autonomously reason, continuously learn and act as a human would. Such an AI would need to combine several branches of AI, for instance computer vision and Natural Language Processing (NLP, i.e., the understanding of human/natural language). Weak AI can only handle one task at a time: for instance, a computer vision algorithm able to identify wells in Africa cannot transition to monitoring migrating herds without retraining its neural network weights. At the time of writing, we are still far from ‘strong’ AI. So what does a “space” virtual assistant actually look like in 2020?

Figure: CIMON-2 on the ISS with ESA astronaut Luca Parmitano (credit: ESA/DLR/NASA)

The answer is floating in microgravity in the figure above. CIMON-2, short for Crew Interactive MObile companioN, is an AI-based astronaut assistant developed by the German Aerospace Center (DLR), Airbus, IBM and the Ludwig Maximilian University of Munich [1, 2]. When humanity ventures beyond Low Earth Orbit, communication delays will eventually prevent live monitoring and support of crews by ground control. This is precisely when virtual assistants, or expert systems, such as CIMON will become essential team members. Expert systems store large amounts of technical knowledge and mimic experts’ reasoning, allowing them to stand in for ground-based experts, provide quick answers and help safeguard mission success.

Virtual assistants are, however, more than just extremely brainy co-workers: they can also express emotional intelligence, learning from their interactions with humans. Relying on sentiment analysis (a technique sketched in the code example below), CIMON-2 can assess the affective state of its human teammate, express empathy when needed and help the crew overcome the stress of long-duration missions. On the American side, NASA has developed R5, aka Valkyrie, a semi-autonomous robot designed to operate in hostile environments such as Mars [3]. We can bet that these assistants will play a key role in supporting future space exploration missions and the new Moon race.
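To make “sentiment analysis” concrete, here is a minimal sketch of what it can look like in code. This is not CIMON’s actual implementation: it assumes the open-source Hugging Face transformers library and its default, general-purpose sentiment model, and the astronaut utterance is invented for illustration.

```python
# Minimal sentiment-analysis sketch (illustrative, not CIMON's code).
# Assumes the Hugging Face `transformers` library is installed.
from transformers import pipeline

# Load an off-the-shelf, general-purpose sentiment classifier.
classifier = pipeline("sentiment-analysis")

# Hypothetical astronaut utterance.
utterance = "I can't get this experiment to work and we are behind schedule."

print(classifier(utterance))
# -> e.g. [{'label': 'NEGATIVE', 'score': 0.99...}]
# A virtual assistant could use such a signal to respond with
# empathy or to suggest a break.
```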

Figure: NASA’s Valkyrie (credit: NASA)

The potential of AI-based virtual assistants is not limited to astronaut support: AI agents can also play a critical role in decision-making at the early stages of mission design. Over the past decades, we have accumulated large amounts of knowledge on space mission design, mostly as unstructured textual data. Searching for information has therefore become highly time-consuming, a monotonous task that can be taken over by virtual assistants, facilitating access to past knowledge and even inferring new knowledge.

NLP, and its subset NLU (Natural Language Understanding), is the branch of AI that enables algorithms to grasp the context and meaning of information stored as text. The Design Engineering Assistant (DEA), a project led at the University of Strathclyde (Scotland) and supported by the ESA Concurrent Design Facility (CDF), is an expert system aiming to support space systems engineers during feasibility studies [4, 5]. Relying on NLP/NLU and Deep Learning methods, the DEA can traverse and “understand” the content of past mission reports, books and even journal publications. The extracted information is stored in a Knowledge Graph equipped with an inference engine, which derives implicit knowledge from explicit concepts and thereby yields new insights into the accumulated knowledge. The DEA can, for instance, assess the similarity of a current mission to past missions, a task usually performed manually by systems engineers.
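For a flavour of what such an inference step can look like, here is a toy Python sketch. The triples and relations are hypothetical and far simpler than the DEA’s actual ontology: a real Knowledge Graph relies on much richer semantics, but the principle of deriving implicit facts from explicit ones is the same.

```python
# Toy inference engine: derive implicit facts from explicit triples
# by exploiting the transitivity of the "is_a" relation.
# The facts below are illustrative, not taken from the DEA ontology.
explicit = {
    ("Rosetta", "is_a", "comet mission"),
    ("comet mission", "is_a", "deep-space mission"),
    ("deep-space mission", "is_a", "space mission"),
}

def transitive_closure(triples):
    """Repeatedly chain is_a links until no new fact appears."""
    facts = set(triples)
    changed = True
    while changed:
        changed = False
        for a, r1, b in list(facts):
            for c, r2, d in list(facts):
                if r1 == r2 == "is_a" and b == c and (a, "is_a", d) not in facts:
                    facts.add((a, "is_a", d))
                    changed = True
    return facts

for fact in sorted(transitive_closure(explicit) - explicit):
    print("inferred:", fact)
# inferred: ('Rosetta', 'is_a', 'deep-space mission')
# inferred: ('Rosetta', 'is_a', 'space mission')
# inferred: ('comet mission', 'is_a', 'space mission')
```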

The DEA has two notable American ‘cousins’: Daphne, developed at Texas A&M University [6], and SEVA, the Systems Engineer’s Virtual Assistant, developed at George Mason University [7] in cooperation with NASA. Daphne specialises in the design of Earth observation satellite missions. Among other tasks, it can make recommendations on how to design new satellite missions and can assess the strengths and weaknesses of a mission plan by weighing it against a historical database of past Earth observation missions. SEVA, like the DEA, focuses on information contained in textual data: it aims to support systems engineers in their daily work by managing complex information and providing high-level question answering. Further along the development pipeline, the European startup VisionSpace Intelligent Systems recently released Karel.ai, a conversational assistant for the operation of complex systems, notably spacecraft operations.

By now you are hopefully convinced that virtual assistants are the best co-workers of tomorrow, and you might be wondering what stands between you and this wonderful future of human-robot cooperation. Data is an AI enabler, but it can also be its ruin. AI is still largely biased by the data it is fed, often inheriting its biases from humans themselves. As the computer-science saying goes: ‘garbage in, garbage out’. In the space field, we simply lack open-source labelled datasets to train or fine-tune our models, and accurate models require large training databases.

But things are about to change: after massive success in the field of computer vision, transfer learning is now revolutionising the NLP/NLU landscape. Transfer learning is the ability to train a Deep Learning model on one task, usually with a large dataset, and then leverage that knowledge to learn another task from a much smaller dataset. With the advance of models such as BERT and GPT-3, it will soon be possible to reach high performance with smaller datasets. However, we still have to overcome two major challenges: data silos (the isolation of information) and knowledge hoarding (the unwillingness to share information with others).
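For a sense of what transfer learning looks like in practice, here is a minimal sketch that fine-tunes a pre-trained BERT model on a tiny, invented classification task. It assumes the Hugging Face transformers library and PyTorch; the labels and sentences are hypothetical, and a real application would of course use far more data.

```python
# Transfer-learning sketch: reuse pre-trained BERT weights and
# fine-tune them on a small, domain-specific task.
# Assumes `transformers` and `torch` are installed; data is invented.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # pre-trained body, new task head

# Tiny illustrative dataset: 1 = propulsion-related, 0 = not.
texts = ["The thruster delivers 400 N of thrust.",
         "The ground segment schedules the downlink passes."]
labels = torch.tensor([1, 0])

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for epoch in range(3):  # a few passes are often enough when fine-tuning
    optimizer.zero_grad()
    output = model(**batch, labels=labels)  # loss computed internally
    output.loss.backward()
    optimizer.step()
```

The key point is that only a handful of labelled examples are needed on top of the knowledge already captured in the pre-trained weights, which is exactly what makes the approach attractive for data-poor domains like space mission design.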

In addition, we need to build a relationship of trust between the AI and its users. For such a relationship to be fruitful, transparency and interpretability are key. AI is often seen as a black box because its backend processes are usually very difficult for users to understand. Interacting with an incomprehensible system can create discomfort and distrust, which is problematic for AI systems involved in critical decision-making. But promising research is ongoing to improve the explainability of virtual assistants [8] and overcome these human-machine interaction challenges.

Figure: TARS robot supporting astronauts Cooper and Brand investigating an ocean world in Interstellar (2014).

To conclude, AI-based virtual assistants have the potential to become essential assets of the space workforce. However, virtual assistants remain tools to support and enhance human work. They are still too immature and narrow to completely replace human operators, and they require considerable human validation work. In the near future, we can rest assured that humans are still needed in the loop. With the continuous improvement and diversification of AI algorithms, one can however wager that virtual assistants will, in the next decade, blend into every step of a mission’s lifecycle. They will facilitate the handling of complex models, flag design flaws at the early development stages, quickly assess new design options, compute trade-offs and help teams converge faster on sound designs. Autonomous reasoning will enable more intricate testing scenarios and relieve the workload of human operators.

Finally, humans and robots will work side by side on the Moon and Mars, fulfilling our endeavour to become an interplanetary species.

Note: This article focused only on AI-based agents, mostly involving NLP/NLU. However, AI is a wide field and many of its branches find applications in the space sector; Earth observation, for instance, already relies heavily on AI for data analysis. To cite only a few recent exciting achievements: ESA recently launched ɸ-sat, an AI chip flying onboard one of the FSSCat mission’s CubeSats. With ɸ-sat, the payload has a “brain” to pre-process data onboard, filter out cloudy images and avoid sending useless data back to Earth [9]. AI also increases rovers’ autonomy: the NASA rover Perseverance, currently on its way to Mars, features an upgraded version of a component flown on the Curiosity rover, which should allow it to identify targets autonomously [10]. With the integration of AI in support of space activities, exciting new opportunities are materialising!

Audrey Berquand is a third-year PhD student at the University of Strathclyde (Glasgow, Scotland), working on the application of Natural Language Processing/Understanding and Deep Learning to the early stages of space mission design. If you’re interested in discussing virtual assistants in the space field, you can contact her via LinkedIn.

References:
[1] DLR (2020). CIMON-2 makes its debut on the ISS. Online at https://www.dlr.de/content/en/articles/news/2020/02/20200415_cimon-2-makes-its-debut-on-the-iss.html (as of July 2020)
[2] Airbus (2020). CIMON-2 makes its successful debut on the ISS. Online at https://www.airbus.com/newsroom/press-releases/en/2020/04/cimon2-makes-its-successful-debut-on-the-iss.html (as of July 2020)
[3] NASA (2015). R5 features. Online at https://www.nasa.gov/feature/r5/ (as of August 2020)
[4] Berquand, A., Murdaca, F., Riccardi, A., Soares, T., Generé, S., Brauer, N. and Kumar, K. (2019). Artificial Intelligence for the Early Design Phases of Space Missions. In Proc. of the 2019 IEEE Aerospace Conference, Big Sky, MT, USA, doi: 10.1109/AERO.2019.8742082
[5] Berquand, A., Moshfeghi, Y. and Riccardi, A. (2020). Space Mission Design Ontology: Extraction of Domain-Specific Entities and Concepts Similarity Analysis. In Proc. of the AIAA SciTech 2020 Forum, Orlando, FL, USA, doi: 10.2514/6.2020-2253
[6] Viros Martin, A. and Selva, D. (2019). From Design Assistants to Design Peers: Turning Daphne into an AI Companion for Mission Designers. In Proc. of the AIAA SciTech 2019 Forum, San Diego, CA, USA, doi: 10.2514/6.2019-0402
[7] Krishnan, J., Coronado, P. and Reed, T. (2019). SEVA: A Systems Engineer’s Virtual Assistant. In Proc. of the AAAI 2019 Spring Symposium on Combining Machine Learning with Knowledge Engineering (AAAI-MAKE 2019), Stanford University, Palo Alto, CA, USA
[8] Viros Martin, A. and Selva, D. (2020). Explanation Approaches for the Daphne Virtual Assistant. In Proc. of the AIAA SciTech 2020 Forum, Orlando, FL, USA, doi: 10.2514/6.2020-2254
[9] ESA (2020). First Earth observation satellite with AI ready for launch. Online at https://www.esa.int/Applications/Observing_the_Earth/Ph-sat/First_Earth_observation_satellite_with_AI_ready_for_launch (as of August 2020)
[10] Burghardt, T. (2020). America’s Next Mars Rover “Perseverance” to Pave Way for Human Exploration. Online at https://www.nasaspaceflight.com/2020/03/americas-mars-rover-perseverance-pave-human-exploration/ (as of August 2020)