In this interview, Shagun Garg, a Doctoral Researcher at the University of Cambridge, shares his journey working at the intersection of water and space technologies. From early experiences with groundwater-related land subsidence in Delhi to improving flood detection methods, his work highlights the advantages and limitations of satellite data in tackling real-world water challenges.
Shagun discusses how nature-based solutions, remote sensing, and machine learning come together in his current research to support more sustainable water management. He also reflects on the importance of inclusive approaches that don’t leave out regions or people due to technical constraints. Throughout, he emphasises curiosity, collaboration, and the value of noticing what others might overlook.
How do you personally and professionally relate to water and/or space technologies?
Personally, I’ve always been aware of how important water is. Growing up in India, we didn’t have unlimited access: water supply was limited to certain hours, and that was just part of daily life. So, from early on, I understood that water is something to value, not take for granted.
Professionally, my journey truly started with the 2013 Kedarnath flood tragedy in Uttarakhand, my home state in India. It was caused by what is known as a glacial lake outburst flood, or GLOF – basically, water held back by a natural dam formed by landslides or debris, which suddenly bursts open, causing widespread destruction downstream. What really moved me was learning from the news that early warning signs could have been identified using satellite imagery. That discovery truly ignited my interest in remote sensing and its potential for disaster management.
Can you tell us about your current position as a Doctoral Researcher at the University of Cambridge?
I'm working on my PhD at Cambridge, where I'm combining water management and remote sensing. There's this really exciting global movement toward nature-based solutions – things like river restoration, pond rejuvenation, and wetland protection – to address floods, droughts, and other water-related risks. The challenge is, despite all the enthusiasm, we often lack solid evidence about how effective these solutions really are. In my PhD, I'm investigating ways to use satellite data to generate concrete evidence by tracking changes over time, measuring impact, and helping inform better decision-making.
How are space technologies used to monitor Nature-based solutions (NbS)? How do NbS contribute to sustainable water management?
You know, it's fascinating how much attention nature-based solutions are getting right now, both in industry and academia. But we also need strong evidence about what works and what doesn’t – evidence that also helps us make a strong case for investing in these solutions. Satellites allow us to consistently monitor changes in land cover, water extent, vegetation health, and even surface deformation across large areas and through time. This is especially valuable in regions where on-the-ground monitoring is limited or too expensive.
Let me give you some examples: we can see if a restored wetland is doing its job by holding water longer, whether a pond is maintaining its size, or if areas downstream from our interventions are experiencing less flooding. This kind of concrete evidence is absolutely crucial if we want to make nature-based solutions a mainstream part of our water and climate strategies.
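One common way to track whether a pond or wetland is holding water over time is a water index computed from optical satellite bands. The sketch below uses McFeeters' Normalized Difference Water Index (NDWI) on toy green and near-infrared arrays; the band values, threshold, and function names are illustrative assumptions, not the interviewee's actual pipeline.

```python
import numpy as np

def ndwi(green, nir):
    """McFeeters' NDWI = (Green - NIR) / (Green + NIR).
    Open water reflects green light but absorbs near-infrared,
    so NDWI values above ~0 typically indicate water."""
    green = green.astype(float)
    nir = nir.astype(float)
    return (green - nir) / (green + nir + 1e-9)  # avoid divide-by-zero

def water_extent_pixels(green, nir, threshold=0.0):
    """Count pixels classified as water; multiplying by pixel area
    gives the water surface extent for one acquisition. Comparing
    counts across dates shows whether a pond is shrinking or stable."""
    return int(np.sum(ndwi(green, nir) > threshold))

# Toy 4x4 scene: the top-left 2x2 block is water (bright green, dark NIR)
green = np.array([[80, 80, 20, 20],
                  [80, 80, 20, 20],
                  [20, 20, 20, 20],
                  [20, 20, 20, 20]], dtype=float)
nir = np.array([[10, 10, 60, 60],
                [10, 10, 60, 60],
                [60, 60, 60, 60],
                [60, 60, 60, 60]], dtype=float)
print(water_extent_pixels(green, nir))  # 4 water pixels
```

Repeating this count over a time series of images is what turns a single snapshot into evidence of whether a restored water body is actually maintaining its size.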
Please tell us about your work on land subsidence due to groundwater overextraction in Delhi. How are satellite applications used to study the effects of decreasing groundwater levels?
That's actually one of my favourite projects to talk about! During my time at the Indian Institute of Technology (IIT) Bombay and Leibniz University Hannover, we were investigating ground movement in Delhi, where the ground is literally sinking – we call it land subsidence. It happens because the city is pumping out groundwater way faster than nature can replenish it, causing the underground soil layers to compact.
The numbers we found were pretty alarming: in some parts of Delhi's National Capital Region, the ground is sinking by about 17 centimeters per year (Fig 1(d-e)). Now, I know that might not sound dramatic – you can't exactly see it happening day to day – but over time, it's seriously concerning. We're talking about potential damage to buildings, roads, and all sorts of underground infrastructure.
But I also have a happy example to share: In one area called Dwarka, we discovered something encouraging. This area used to be sinking quite rapidly, but our data showed that the ground had actually stabilized, and even started bouncing back a bit (Fig 1(f)). Results suggest this improvement is linked to some smart local policies: they cracked down on illegal groundwater pumping and made rainwater harvesting mandatory in addition to reviving old water bodies in the region.
It's a perfect example of how satellite data isn't just about identifying problems – it can also show us when our solutions are working!

What are other challenges related to water management in India and how can satellite remote sensing or other space-based technologies help?
Well, India's water challenges are quite complex, really! We're dealing with everything from groundwater depletion and seasonal droughts to devastating floods and inequitable water access. And these issues aren't happening in isolation – they're being amplified by rapid urbanization, climate change, and economic disparities.
Remote sensing is already proving incredibly valuable. We are using it to map flood risks, track drought conditions, identify irrigation needs, and monitor surface water availability. And the technology is evolving rapidly: satellites now offer finer spatial resolution, more frequent observations, and faster data processing than ever before.
But the real challenge in all this is: how do we ensure all this satellite data actually makes it into the hands of decision-makers in a way that's clear, easy to use, trustworthy, and actionable? And how do we include local knowledge and ensure solutions are designed with the end-users in mind?
You previously worked with the German Aerospace Center (DLR) and GFZ Potsdam, where you developed new techniques to improve flood detection using radar imagery. Can you tell us about your findings? And in general, how is flood detection and flood monitoring performed with remote sensing?
The basic concept of satellite-based flood mapping is simple – we compare satellite images from before and after a flood to see what's changed. But during floods, it's usually cloudy. And optical satellites can't see through clouds. Ironic, isn't it? Just when we need the data most, we can't get it!
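The before/after comparison idea can be sketched in a few lines. This is a deliberately naive illustration, not the actual method from the interview: it flags pixels whose optical reflectance dropped sharply between two acquisitions, since open water appears dark in most optical bands. The arrays and threshold are made-up toy values.

```python
import numpy as np

def flood_change_map(before, after, threshold=0.3):
    """Naive optical change detection: a pixel that was bright before
    the event but dark after it is flagged as possibly flooded."""
    diff = before.astype(float) - after.astype(float)
    return diff > threshold  # boolean mask of candidate flood pixels

# Toy 2x2 reflectance grids: two pixels go dark after the event
before = np.array([[0.6, 0.6],
                   [0.6, 0.6]])
after = np.array([[0.1, 0.6],
                  [0.6, 0.1]])
print(flood_change_map(before, after).sum())  # 2 flagged pixels
```

Of course, as noted above, this only works when the optical satellite can actually see the ground – which is exactly what clouds prevent during most floods.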
So, we turn to radar imagery, which can penetrate clouds and capture the Earth's surface regardless of weather or time of day. But radar comes with its own challenges. In radar images, both open water and dry sandy land appear dark because of how the signal interacts with the surface (Fig 2 (a-b)). So, in arid or semi-arid regions—like parts of Australia, the Middle East, or Africa—floodwaters and bare soil can look almost identical. This makes accurate flood mapping very difficult (Fig 2(c)).
That’s the problem we were trying to solve. Instead of just looking at brightness (or amplitude), we asked: what else can we learn from radar data to better separate flood from non-flood? And that’s what led to our method using radar coherence, something I explored further in my published work.

In your publication, you proposed a new method using radar data to improve flood detection in arid regions. Could you tell us what you did and what your main findings were?
Yes - so to deal with the challenge of water and dry sand looking similar in radar images, we proposed using not just the radar amplitude (which shows how strong the signal is), but also radar coherence. Coherence also takes the ‘phase’ information into account, which can be used as a proxy for how much the surface has changed between two radar images. During a flood the surface changes, so the coherence drops.
By combining both amplitude and coherence from Sentinel-1 data, we were able to better separate flooded areas from dry sandy land, especially in arid regions where traditional methods often fail. This helped reduce misclassification errors significantly.
To check whether our methodology really worked, we tested it on three real-world flood events in arid regions. Across all three cases, the results were consistent—we saw a 15–25 per cent improvement in accuracy compared to existing methods. That was really encouraging, and it showed that with the right approach, even difficult regions can be mapped more reliably.
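The core logic described above can be illustrated with a minimal sketch. This is my own toy rendering of the amplitude-plus-coherence idea, not the published algorithm: the thresholds, variable names, and pixel values are all hypothetical assumptions.

```python
import numpy as np

def classify_flood(amplitude, coh_pre, coh_co,
                   amp_thresh=0.05, coh_drop=0.3):
    """Toy amplitude + coherence flood mapping:
    - low radar amplitude alone flags both open water AND dry sand;
    - adding a coherence drop between the pre-event image pair
      (coh_pre) and the co-event pair (coh_co) separates the two,
      because flooding decorrelates the surface while stable dry
      sand stays coherent."""
    dark = amplitude < amp_thresh                  # water OR sand
    decorrelated = (coh_pre - coh_co) > coh_drop   # surface changed
    return dark & decorrelated                     # flooded pixels only

# Three toy pixels: 0 = dry sand (dark but coherent),
# 1 = flooded (dark and decorrelated), 2 = vegetation (bright)
amplitude = np.array([0.02, 0.02, 0.30])
coh_pre = np.array([0.80, 0.85, 0.90])
coh_co = np.array([0.75, 0.20, 0.85])
print(classify_flood(amplitude, coh_pre, coh_co))
# flags only pixel 1: sand stays coherent, the flood decorrelates
```

The key point is the AND condition: amplitude alone would misclassify pixel 0 (dry sand) as water, which is exactly the arid-region error the combined approach reduces.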
In your work you talk about inclusive flood mapping. What exactly is that? How can flood mapping become more inclusive, and who is not included in most approaches today?
So, when I say inclusive flood mapping, I am mostly talking about places. Let me explain: In a lot of current flood mapping approaches, regions like arid and semi-arid zones are excluded - not intentionally, but because the methods just don’t work well there. The land is tricky to classify, and the models often fail to detect floods accurately, so those areas are often left out of the maps altogether.
In our recent work, we tried to challenge that. We asked: what if we could improve the methods so that these “excluded” regions are also part of the flood map? That’s what we mean by inclusion—not giving up on difficult areas just because the data is noisy or ambiguous.
Of course, this also connects back to people. If a flood-affected area doesn’t show up on the map, that community might not get support, or their risks might be underestimated. So inclusive flood mapping is really about making sure no region is left out because of technical blind spots.
What role does machine learning play in your research?
Machine learning is becoming a key part of remote sensing. When you’re dealing with large volumes of satellite data, it’s just not possible to go through everything manually. That’s where machines really help—they can pick up patterns that are hard for humans to see and do it at scale.
In my research, I use machine learning mostly for flood and water mapping. You can train a model by showing it examples of how flooding looks in satellite images, and then, when a new flood happens, the model can quickly generate a map. It might not be perfect, but it saves a lot of time, and with more data and better techniques, it’s getting more accurate too. And in emergency response, speed can be just as crucial as perfect accuracy.
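The train-then-predict idea described above can be shown with a deliberately tiny stand-in classifier. Real flood-mapping pipelines use richer models (random forests, neural networks), but the principle is the same: learn from labelled example pixels, then label new pixels automatically. Everything here – the nearest-centroid model, the (amplitude, coherence) features, and the numbers – is an illustrative assumption.

```python
import numpy as np

def train(features, labels):
    """Learn one mean feature vector (centroid) per class
    from labelled training pixels."""
    return {c: features[labels == c].mean(axis=0)
            for c in np.unique(labels)}

def predict(centroids, pixels):
    """Assign each new pixel the class of its nearest centroid."""
    classes = list(centroids)
    dists = np.stack([np.linalg.norm(pixels - centroids[c], axis=1)
                      for c in classes])
    return np.array(classes)[dists.argmin(axis=0)]

# Toy labelled pixels with (amplitude, coherence) features;
# label 1 = flooded (dark, decorrelated), 0 = dry (bright, coherent)
X = np.array([[0.02, 0.20], [0.04, 0.25],
              [0.25, 0.80], [0.30, 0.85]])
y = np.array([1, 1, 0, 0])
centroids = train(X, y)

# Two unseen pixels from a "new flood": one flood-like, one dry-like
print(predict(centroids, np.array([[0.03, 0.22], [0.28, 0.90]])))
```

This is where the speed argument comes in: once trained, the model labels an entire scene in seconds, which matters far more during an emergency than squeezing out the last few percent of accuracy.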
What are three skills you use daily in your research?
Problem-solving is definitely the first one: because things rarely work on the first try. Whether it’s a bug in the code or a strange result in the satellite data, I’m constantly trying to figure out what’s going wrong and how to fix it.
Second would be remote sensing and data processing. A big part of my work involves writing scripts, handling large datasets, and making sense of satellite imagery - so that, I'd say, is a daily routine.
And third, definitely communication. Whether it’s writing papers, discussing ideas with colleagues and supervisors, or preparing presentations, explaining the science clearly is just as important as doing it.
What do you need to innovate?
For me, it starts with curiosity. I believe innovation often comes from noticing the small things that others might overlook—something that doesn’t quite fit, or doesn’t make sense, and then digging deeper into it.
But you also need the right support system—mentors, collaborators, and a space where you can try ideas without fear of failure. And of course, you need time. Time to think, explore, and build something that actually works.
What is your favourite aggregate state of water?
Unmapped Water.
Not your typical answer, I know. But these hidden waters – the floods we miss, the ponds not in our databases, the shrinking lakes at the edges of satellite images - they're where the real work begins.