Mint explainer: AI’s role in powering space missions like Chandrayaan-3
If Chandrayaan-3 succeeds, India would become the first country to land at the lunar south pole, a region that interests space agencies and private space companies because the water ice discovered there could yield hydrogen and oxygen, providing breathable air and potential rocket fuel.
And artificial intelligence (AI) has a major role to play in such missions. Chandrayaan-2, for instance, had planned to use the AI-powered ‘Pragyan’ (Sanskrit for wisdom), a homegrown, solar-powered robotic vehicle designed to manoeuvre across the lunar surface on six wheels. It carried a Laser Induced Breakdown Spectroscope (LIBS) from the Laboratory for Electro Optic Systems (LEOS) in Bengaluru to identify elements present near the landing site, and an Alpha Particle X-ray Spectrometer (APXS) from the Physical Research Laboratory (PRL) in Ahmedabad to inspect the composition of those elements.
Pragyan, which can communicate only with the lander, also carried motion technology developed by IIT-Kanpur researchers to help the rover manoeuvre on the moon’s surface and aid in landing. The algorithm was meant to help the rover trace water and other minerals on the lunar surface, and to send back pictures for research and examination.
Even though that landing attempt failed, Vikram and Pragyan are expected to deliver the goods this time around. Vikram’s AI algorithm will use data from the lander’s sensors to calculate the best landing spot and control the descent to the lunar surface, weighing factors such as the terrain, the lander’s weight and the remaining fuel. Pragyan’s AI algorithm will use sensor data to plan the rover’s route and to identify and avoid obstacles. AI will also be used to analyse the large datasets of images and other data collected by previous lunar missions.
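To make the idea concrete, here is a minimal, hypothetical sketch in Python of hazard-aware landing-site selection. The hazard maps, cost weights and the `pick_landing_cell` function are invented for illustration; ISRO has not published Vikram’s actual algorithm.

```python
import numpy as np

def pick_landing_cell(slope_deg, roughness, fuel_cost, max_slope=12.0):
    """Return (row, col) of the lowest-cost feasible cell in a terrain grid.

    Each input is a 2-D array with one value per candidate landing cell,
    e.g. derived from descent-camera imagery and onboard state.
    """
    feasible = slope_deg <= max_slope           # tip-over safety limit
    # Invented weighted cost: flat, smooth, cheap-to-reach cells win.
    cost = 0.5 * slope_deg / max_slope + 0.3 * roughness + 0.2 * fuel_cost
    cost[~feasible] = np.inf
    return np.unravel_index(np.argmin(cost), cost.shape)

# Toy 100x100 hazard maps standing in for real sensor-derived grids.
rng = np.random.default_rng(0)
print(pick_landing_cell(rng.uniform(0, 25, (100, 100)),
                        rng.uniform(0, 1, (100, 100)),
                        rng.uniform(0, 1, (100, 100))))
```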
Further, when India eventually launches Gaganyaan, its human spaceflight programme, it will be preceded by two uncrewed missions. The first will fly without any crew or crew stand-in, while the second will carry ‘Vyommitra’, a half-humanoid robot (so described because it has no legs) that looks and talks like a human and will conduct experiments aboard the spacecraft.
These, of course, are simply cases in point of how AI and robotics have been accelerating space exploration over the years.
For instance, Earth Observing-1 (EO-1), a NASA Earth-observation satellite launched in 2000 and decommissioned in March 2017, was built to develop and validate breakthrough instrument and spacecraft-bus technologies. Among other tasks, EO-1 was used to test new software such as the Autonomous Sciencecraft Experiment, which allowed the spacecraft to decide for itself how best to create a desired image.
Similarly, the Institute for Astronomy at the University of Hawaii, along with the JPL AI group at the California Institute of Technology (Caltech), developed the Sky Image Cataloging and Analysis Tool (SKICAT), a software system that automatically catalogues and measures sources detected in sky-survey images, classifies them as stars or galaxies, and assists astronomers in scientific analyses of the resulting object catalogs.
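SKICAT’s classification stage relied on decision trees learned from labelled examples. A minimal sketch of that approach in Python, with synthetic features and labels standing in for real catalog data, might look like this:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
# Toy per-source features: [magnitude, ellipticity, extent]; the real
# SKICAT features were measured from survey plate images.
X = rng.normal(size=(500, 3))
# Fake rule for labels: extended, elongated sources count as galaxies.
y = ((X[:, 1] + X[:, 2]) > 0.5).astype(int)   # 0 = star, 1 = galaxy

clf = DecisionTreeClassifier(max_depth=4).fit(X, y)
print(clf.predict(X[:5]))
```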
Alvin Yew, a research engineer at NASA’s Goddard Space Flight Center, is teaching a machine to use features on the Moon’s horizon to navigate across the lunar surface. He is training an AI model to recreate those features as they would appear to an explorer on the ground, using digital elevation models from the Lunar Orbiter Laser Altimeter (LOLA), which measures slopes and lunar surface roughness and generates high-resolution topographic maps of the Moon.
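The core idea can be sketched simply: render the skyline visible from candidate locations on a digital elevation model (DEM), then pick the location whose rendered skyline best matches the one the explorer actually sees. The `horizon_profile` and `localize` functions below are simplified, assumed stand-ins, not NASA’s code.

```python
import numpy as np

def horizon_profile(dem, r, c, cell=10.0, n_az=72, max_steps=200):
    """Max elevation angle seen along each azimuth from DEM cell (r, c)."""
    angles = np.zeros(n_az)
    for i, az in enumerate(np.linspace(0, 2 * np.pi, n_az, endpoint=False)):
        best = -np.inf
        for step in range(1, max_steps):
            rr = int(round(r + step * np.sin(az)))
            cc = int(round(c + step * np.cos(az)))
            if not (0 <= rr < dem.shape[0] and 0 <= cc < dem.shape[1]):
                break  # ray left the map
            rise = dem[rr, cc] - dem[r, c]
            best = max(best, np.arctan2(rise, step * cell))
        angles[i] = best
    return angles

def localize(dem, observed_skyline, candidates):
    """Pick the candidate cell whose rendered skyline best matches."""
    return min(candidates,
               key=lambda rc: float(np.sum(
                   (horizon_profile(dem, *rc) - observed_skyline) ** 2)))
```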
John Moisan, an oceanographer at NASA’s Wallops Flight Facility in Virginia, is developing the ‘A-Eye’, essentially a movable, AI-steered sensor that interprets images of Earth’s aquatic and coastal regions. Moisan’s onboard AI would scan the collected data in real time for significant features, then steer an optical sensor to collect more detailed data in infrared and other frequencies.
AI is also being used for trajectory and payload optimization. A system known as AEGIS already runs on NASA’s Mars rovers, autonomously targeting cameras and choosing what to investigate. The next generation of such systems is expected to control vehicles, autonomously assist with study selection, and dynamically schedule and perform scientific tasks.
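In spirit, autonomous targeting reduces to scoring features in an image and picking one for follow-up observation. The sketch below, with an invented `select_target` function, illustrates that loop; the real AEGIS pipeline is far more sophisticated.

```python
import numpy as np
from scipy import ndimage

def select_target(image, threshold=0.8):
    """Return the (row, col) centroid of the largest bright region."""
    mask = image > threshold * image.max()
    labels, n = ndimage.label(mask)       # connected bright regions
    if n == 0:
        return None
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    best = int(np.argmax(sizes)) + 1      # label of the largest region
    return ndimage.center_of_mass(mask, labels, best)

# Toy navigation-camera frame with one bright "rock".
frame = np.zeros((64, 64))
frame[40:44, 10:15] = 1.0
print(select_target(frame))               # ~ (41.5, 12.0)
```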
And by using the online tool AI4Mars to label terrain features in pictures downloaded from the Red Planet, anyone can help train an artificial-intelligence algorithm to automatically read the landscape.
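Under the hood, such crowd-sourced labels would typically feed a supervised image classifier. Here is a minimal, assumed sketch in PyTorch; the four class names follow AI4Mars’ public description, but the architecture and toy data are placeholders, not the project’s training code.

```python
import torch
import torch.nn as nn

# Terrain classes as described publicly for AI4Mars.
CLASSES = ["soil", "bedrock", "sand", "big_rock"]

# Small CNN over 64x64 grayscale patches; a placeholder architecture.
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, len(CLASSES)),
)

# Toy batch standing in for crowd-labelled Mars terrain patches.
patches = torch.randn(8, 1, 64, 64)
labels = torch.randint(0, len(CLASSES), (8,))

loss = nn.CrossEntropyLoss()(model(patches), labels)
loss.backward()  # one training step's gradient computation
print(float(loss))
```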
AI’s ability to sift through humongous amounts of data and find correlations helps in analysing that data intelligently. The European Space Agency’s (ESA) ENVISAT, for instance, produced around 400 terabytes of data every year. Astronomers estimate that the Square Kilometre Array Observatory (SKAO), an international effort to build the world’s largest radio telescope across South Africa’s Karoo region and Western Australia’s Murchison Shire, will generate 35,000 DVDs’ worth of data every second, roughly as much as the internet produces in a day.
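A quick back-of-the-envelope check of that figure, assuming a standard 4.7 GB single-layer DVD:

```python
# 35,000 single-layer DVDs (~4.7 GB each) per second, in terabytes.
dvds_per_second = 35_000
gb_per_dvd = 4.7  # assumed single-layer DVD capacity
print(dvds_per_second * gb_per_dvd / 1000, "TB per second")  # ~164.5
```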
The James Webb Space Telescope, which NASA launched to an orbit around 1.5 million kilometres from Earth, also relied on AI-powered autonomous systems to oversee the full deployment of its 705-kilogram primary mirror. How would such mountains of data be analysed if not for AI?
AI is also being used to make the design of hardware components for space more efficient. Research engineer Ryan McClelland at NASA’s Goddard Space Flight Center, for example, has pioneered the design of specialized, one-off parts using commercially available AI software, producing hardware components that weigh less, tolerate higher structural loads, and take a fraction of the time humans need to develop similar parts.
McClelland’s components have been adopted by NASA missions for astrophysics balloon observatories, Earth-atmosphere scanners, planetary instruments, space-weather monitors, space telescopes, and even the Mars Sample Return mission. Goddard physicist Peter Nagler has used these components to develop the EXoplanet Climate Infrared TElescope (EXCITE) mission, a balloon-borne telescope built to study hot Jupiter-type exoplanets orbiting other stars. 3D printing with resins and metals will unlock the future of AI-assisted design, according to McClelland (https://www.nasa.gov/feature/goddard/2023/nasa-turns-to-ai-to-design-mission-hardware).
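Generative-design tools of this kind broadly work by proposing many candidate geometries, discarding those that fail structural checks, and keeping the lightest survivor. The toy loop below illustrates the idea with an invented bracket model and stress formula; it is not any vendor’s solver.

```python
import random

random.seed(42)

def stress_mpa(thickness_mm, load_n=500.0):
    # Invented inverse relation: thinner walls see higher stress.
    return load_n / thickness_mm

def mass_kg(thickness_mm):
    # Invented linear mass model.
    return 0.1 * thickness_mm

best = None
for _ in range(1000):
    t = random.uniform(1.0, 20.0)    # candidate wall thickness, mm
    if stress_mpa(t) > 100.0:        # discard parts that fail strength
        continue
    if best is None or mass_kg(t) < mass_kg(best):
        best = t
print(f"lightest feasible thickness: {best:.2f} mm")
```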
AI is also taking giant steps in space through AI-powered astronaut assistants. The Crew Interactive Mobile CompaniON (CIMON), the first AI-based astronaut assistance system, returned to Earth on 27 August 2019 after spending 14 months on the International Space Station (ISS). CIMON was developed by Airbus, in partnership with IBM, for the German Aerospace Center (DLR). It is a floating computer that members of the Airbus team described as a ‘flying brain’.
CIMON can display and explain information and instructions for scientific experiments and repairs, all through voice control, leaving astronauts with both hands free. Its ‘ears’ comprise eight microphones used to detect the direction of sound sources, plus an additional directional microphone for good voice recognition. Its mouth is a loudspeaker that can speak or play music. Twelve internal rotors allow CIMON to move and revolve freely in all directions, so it can turn towards an astronaut when addressed, nod or shake its head, and follow the astronaut either autonomously or on command.
Its successor, CIMON-2, also uses IBM’s ‘Watson’ AI technology.