DGI 2025

10 - 12 February, 2025

Queen Elizabeth II Centre, London


Commercial Imagery and Temporal Analysis: Interview with Digital Globe

Torturing extra information out of a pixel

Normally, we at DGI would never condone torture under any circumstances, but when it comes to squeezing the maximum information out of your geospatial dataset, we’re all for it!

Making the most of your pixels – gathering as much data as you can from your imagery – is one of the most important factors in multi-int today, and Digital Globe are at the frontier of research and development of new commercial imagery technologies. But before marching forward, you need to know your past.

To explain more, we spoke to Jack Hild, VP of US Defence Strategy at Digital Globe.

Video transcript:

One of the opening remarks at DGI was from Digital Globe. Perhaps you could tell us a little bit more about what was spoken about?

One of the things that we’ve been thinking about a lot lately is the four eras of commercial imagery. With the constellation performing as well as it is, we’ve got a lot of collections, and what we’re trying to do is understand the various eras so that we understand where the industry is going, but also then look at a couple of critical places that we think can benefit from this tremendous archive of commercial imagery.

Era 1 was resolution. When you think back almost 40 years ago to what Landsat gave us, it was about an 80-metre resolution. SPOT came along 14 years later and got us to about 10 metres. Then QuickBird and Ikonos broke the 1-metre barrier, and now we’re in a different class of satellites that are getting in at under a half-metre.

We could do more if there was a desire by the customer base, but we really don’t see a significant increase in resolution in the foreseeable future.

Next was accuracy; and again, you follow that same progression from Landsat to SPOT to Ikonos to the current class, and you see a significant improvement in the accuracy of the imagery. And again, could we do better? If the customer and user community said, “Yes, we want something more accurate,” we could do it, but we’re really getting near the end of how useful that improvement would be.

This brings us to speed, which is the next era; and speed is the time from imaging to receipt by the customer. In the old days with Landsat, you’d get something in the mail, or somebody would drive a disk or a tape to you. Today, we’re seeing a significant improvement in speed, and that’s about where I think the era is today.

We have put ground stations in the northern areas and in Antarctica, we have additional ground stations being installed around the Equator, and customers are using mobile ground stations to put something right at their point of use; so we’re seeing a lot of dramatic changes in the ability to get an image to a user in a very, very short amount of time.

This now brings us to the really exciting part, which is the fourth era: content. General Clapper used to talk about this a lot: “How can you torture some extra information out of a pixel?” And when you look at the advanced MSI sensors that are up today, you’re seeing a lot of progress there. LIDAR is another sensor; hyperspectral is yet another if you look out towards the future.

So, all in all, those four eras are kind of taking us from where we were to where we are now, and I think they provide really good insight into the future.
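To make that fourth era a little more concrete: one of the simplest ways to “torture” extra information out of multispectral pixels is band math. Below is a minimal sketch computing NDVI, a standard vegetation index, from red and near-infrared bands; the band ordering and chip dimensions are illustrative assumptions, not any particular sensor’s layout.

```python
import numpy as np

def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """Normalised Difference Vegetation Index: (NIR - Red) / (NIR + Red).

    Healthy vegetation reflects strongly in NIR and absorbs red light,
    so NDVI rises towards 1 over vegetation and falls towards 0 (or
    below) over bare ground and water.
    """
    red = red.astype(np.float64)
    nir = nir.astype(np.float64)
    denom = nir + red
    # Guard against division by zero in no-data pixels.
    return np.where(denom > 0, (nir - red) / denom, 0.0)

# Hypothetical 4-band (B, G, R, NIR) multispectral chip.
chip = np.random.randint(0, 2048, size=(4, 512, 512))
vegetation_index = ndvi(chip[2], chip[3])
```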

Yet isn’t one of the key problems with all of this that every time you increase the resolution, every time you add another layer of data, you’re adding extra volumes of storage, to the point where it’s increased exponentially and looks set to continue to do so? Can the storage keep up?

Absolutely. The storage vendors will all tell you that they can keep up for sure. But I was with NGA in my previous life and I’ve been in situations where I didn’t have enough data and where I had far too much data; far too much data is a much better problem to have.
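To put some numbers behind “increased exponentially”: pixel count for a fixed area grows with the inverse square of the ground sample distance, so each resolution era multiplies the raw data volume. A rough back-of-the-envelope sketch, assuming a single uncompressed 16-bit band over a 100 km × 100 km area:

```python
# Pixels needed to cover a fixed 100 km x 100 km area at each era's GSD.
AREA_M = 100_000      # side length of the area of interest, in metres
BYTES_PER_PIXEL = 2   # assume one 16-bit band, uncompressed

for era, gsd_m in [("Landsat", 80), ("SPOT", 10),
                   ("Ikonos/QuickBird", 1), ("current class", 0.5)]:
    pixels = (AREA_M / gsd_m) ** 2
    print(f"{era:>18}: {gsd_m:>4} m GSD -> {pixels:.2e} pixels, "
          f"~{pixels * BYTES_PER_PIXEL / 1e9:.3f} GB")
```

The jump from 80 m to 0.5 m alone is a factor of 25,600 in pixel count for the same footprint, before any extra bands or revisits are counted.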

Think about what an archive can do for you: temporal analysis. Being able to look back five years at a piece of real estate can tell you an awful lot about trends that are happening in that particular part of the world. So, I see temporal analysis being fuelled by this growth of imagery.
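A minimal sketch of what such temporal analysis can look like in practice: given a co-registered stack of yearly index values for the same piece of real estate, fit a per-pixel linear trend to expose where things are changing. The five-year stack, its alignment, and the threshold are assumptions for illustration; real imagery would need orthorectification and radiometric normalisation first.

```python
import numpy as np

# Hypothetical stack: 5 yearly, co-registered NDVI rasters (years, rows, cols).
years = np.arange(2020, 2025, dtype=np.float64)
stack = np.random.rand(5, 256, 256)

# Least-squares slope per pixel: positive = greening, negative = browning.
t = years - years.mean()
slope = (t[:, None, None] * (stack - stack.mean(axis=0))).sum(axis=0) / (t ** 2).sum()

changing = np.abs(slope) > 0.05  # flag pixels trending faster than the threshold
print(f"{changing.mean():.1%} of pixels show a notable five-year trend")
```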

Likewise, when I look at automatic feature extraction, which has been the Holy Grail for this industry for a lot of years and to which we’ve continued to make evolutionary improvements, I think this added content - the advanced MSI, the hyperspectral, the higher resolution, the higher accuracy - is all going to feed into a next step of improvements in the efficiency of automating some of the things that we have to do.
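As a toy illustration of the direction automatic feature extraction takes, here is a sketch that thresholds a standard water index (NDWI) and labels the connected regions as candidate water-body features. The bands, threshold, and random test data are illustrative assumptions, not a production pipeline.

```python
import numpy as np
from scipy import ndimage

def extract_water_features(green: np.ndarray, nir: np.ndarray,
                           threshold: float = 0.2):
    """Threshold NDWI = (Green - NIR) / (Green + NIR) and label blobs."""
    g, n = green.astype(np.float64), nir.astype(np.float64)
    ndwi = np.where((g + n) > 0, (g - n) / (g + n), 0.0)
    mask = ndwi > threshold                # candidate water pixels
    labels, count = ndimage.label(mask)    # connected-component features
    sizes = ndimage.sum(mask, labels, range(1, count + 1))
    return labels, sizes

# Hypothetical green and NIR bands from a multispectral chip.
green = np.random.randint(0, 2048, (512, 512))
nir = np.random.randint(0, 2048, (512, 512))
labels, sizes = extract_water_features(green, nir)
print(f"extracted {len(sizes)} candidate water features")
```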

And then, finally, there’s visualisation. I think that we’re going to see new ways of visualising this very complex data to, in the end, get more information that will be more valuable to the user community.

Obviously, we’re all very excited about these huge developments taking place, but what are the real obstacles?

Well, I think that we need to accept the fact that more data is good, whether it’s imagery or whether it’s just an explosion of open-source information that is out there… Twitter is a great example; geocoded photos on Flickr are another. So, there are all kinds of additional data sets.

I think the challenge is getting the tools in place, getting the training, making the data available to folks - there are a lot of smart analysts in this community; they’ll figure out what to do with it.

But what about standards? Are they causing an issue, or is this something that we’re now getting past?

I have to put in a plug for OGC. Those guys have been working for years to try to bring standards into this community. I think they have done a fabulous job, and in fact a lot of the stuff that you see here today is enabled by the work that’s been done by a great group of standards guys over the last 10 or 15 years. So, yes, there’s always going to be a standards issue, but I think we’ve got everything under control and are making great progress.
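For readers unfamiliar with the OGC work being credited here, this is the flavour of what those standards buy you: any compliant client can pull a map from any compliant server with the same request. Below is a sketch of a standard WMS 1.3.0 GetMap call; the endpoint URL and layer name are placeholders, not a real service.

```python
import requests

# Hypothetical OGC WMS endpoint; the parameters below use the standard
# WMS 1.3.0 GetMap vocabulary, so they work against any compliant server.
WMS_URL = "https://example.com/geoserver/wms"

params = {
    "SERVICE": "WMS",
    "VERSION": "1.3.0",
    "REQUEST": "GetMap",
    "LAYERS": "imagery:latest",         # placeholder layer name
    "STYLES": "",
    "CRS": "EPSG:4326",
    "BBOX": "51.49,-0.14,51.51,-0.12",  # lat/lon axis order for EPSG:4326 in 1.3.0
    "WIDTH": 1024,
    "HEIGHT": 1024,
    "FORMAT": "image/png",
}

response = requests.get(WMS_URL, params=params, timeout=30)
response.raise_for_status()
with open("map.png", "wb") as f:
    f.write(response.content)
```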