07 Sep 2023
Four keynotes gave delegates updates on, and detailed commercial opportunities in, diverse and demanding markets.
by Matthew Peach in Amsterdam

The SPIE Sensors + Imaging conference and exhibition in Amsterdam, The Netherlands, combined two of Europe’s best photonics conferences – SPIE Remote Sensing and SPIE Security + Defence.
The associated expo showcased the latest sensor and photonic technologies for imaging and monitoring the Earth’s atmosphere and environment, as well as sensor technologies addressing homeland security and defense.
To complement this diverse offering, Monday’s plenary sessions covered the two main themes – military photonics requirements and Earth observation techniques – with an eye-catching third plenary on developments in artificial intelligence and how it could be exploited and controlled.
“A Dutch perspective on space situational awareness (SSA) for defence” was delivered by Marko Noppen, of the Dutch Ministry of Defence’s Defence Space Security Center, supported by Ivan Ferrario, of TNO, The Netherlands’ national research laboratory.
With the recent establishment of the Dutch Defence Space Security Center (DSSC), the MoD is tasked with the development of Space Situational Awareness for which a variety of systems are required. Noppen gave a breakdown of the need for SSA from a MoD perspective and how the DSSC strives to develop this capability.
Noppen works for the Defence Space Security Center, where he is responsible for the development of space- and ground-based systems with the aim of providing direct support to MoD operations.
He told the audience: “Situational awareness is the knowledge and characterization of space objects and their operational environment. For us, from a practical perspective, this simply means that we want to know where it is, what it is, and what it does.
“So how do we do it? We use sensors throughout the whole electromagnetic spectrum. We are setting up projects to cover all sorts of aspects and combine the information we gather so that we can say more about what an object in space is and what it does – because we don’t only want to see that there is an object; most important to us is what it does.”
Noppen continued, “Of course, everybody has been ramping up their space capabilities in recent years. One thing I want to highlight is that China in particular is planning to launch up to 100 rockets – not satellites – in the coming few years.”
He concluded, “European projects are more open to having greater cooperation between the commercial and military sides, which is really good news for us, and I think both communities will benefit from it.”
“There’s one project that we want to highlight with regard to imaging, especially at optical wavelengths, and that is European defence. There is a lot of money – hundreds of millions of euros – going towards the development of new optical sensors, from ground to space, simply to keep track of everything that is happening,” Noppen said.
Insight into future requirements was delivered by the second speaker, Ivan Ferrario, of TNO, an aerospace engineer with more than 20 years’ experience as a project and program manager and in business development, including almost ten years specifically in laser satellite communications. Ferrario started the Optical Communications Program at TNO in 2016 and has managed it since; in 2020 he also launched the Space Situational Awareness program, which he continues to run.
The challenge of controlling AI
The highlight of the plenaries, arguably, was delivered by Peter Werkhoven, Chief Scientist at TNO. His fascinating and thought-provoking talk was entitled “Meaningful control of intelligent systems”.
He announced, “The rapid evolution of AI technology is well on its way to being able to support and even replace humans in various tasks. Exploiting the application potential, with simultaneous mitigation of systemic risks to our society is proving to be a major challenge.
“How do we tell autonomous AI systems effectively what to do? How do we make them function in a responsible and morally acceptable way? To what extent can we demand explainability?” he asked.
Peter Werkhoven holds a PhD in psychophysics and is a professor of Computer Science at Utrecht University in the fields of augmented reality and AI safety. He has also served on many boards, including the NATO Research and Technology Board, the board of the Topconsortium High Tech Systems and Materials, and the supervisory board of PhotonDelta, and currently sits on the Strategy Team of the National AI Coalition and the Advisory Board Innovation & Technology of the Netherlands Police.
Werkhoven summarized his extensive investigation of the capabilities and implications of artificial intelligence thus: “Considering AI, it is complex and it’s a lot of work, but that’s the only way to meaningfully control it. We have talked about raising intelligent autonomous systems that are morally aligned, intelligent machines. We talked about demanding transparency and explainability; we want AI-based systems to be able to know what they do and to be able to explain that to us, also with respect to their responsibilities.
“We come from a stage in human evolution where evolution was just natural selection. We innovated ourselves and our environments almost randomly – purposelessly, and without any understanding – which was a slow and very costly process.
“But we are now in a phase where we have intelligent design. We know a lot of things and we can communicate about it with each other. We can set goals, from the top down. We base our innovation and intelligent design on understanding.
“But the third phase could be the support of AI – a thinking tool that can enhance human competences, but perhaps without understanding: either AI systems are not able to explain what they are doing, or – and you cannot exclude such situations – they are able to explain what is going on but humans can no longer understand it, because it is beyond the limits of our cognition.
“There can be laws of physics that exist but which we cannot understand. There can be complexity in innovation systems that we would like to know about. So what do we do? Do we take the advice of AI and set into motion those innovations that can solve, for example, the ecological problems that we are facing at the moment? Are we going to trust AI based on its track record? Or are we going to say no, we don’t understand, and we won’t follow that? This is an interesting ethical question in itself that I would like to use as the close of my presentation.”
The final presentation – ESA Earth Observation Programme: highlights, opportunities and challenges – was given by Dr Nicolaus Hanowski of ESRIN, the European Space Agency’s centre in Frascati, Italy.
Hanowski holds a PhD in Earth and Planetary Sciences. He started as a researcher in the US and then transferred to Mission Operations at the German Space Operations Centre. In 2009 he joined ESA, where, in the Science Directorate, he became responsible for the development of science ground segments for missions such as Rosetta, Gaia, JWST, Solar Orbiter and Euclid.
He said, “Recently, ESA Member States have subscribed to funding Earth Observation activities on the scale of 2.7 billion euros over the next three years.”
“Corresponding workplans were approved in early 2023, and numerous activities in the domain of Earth Observation satellites, ground segments, data management, science and applications are proceeding and are now open for participation,” he said.