Going the Way of the Mainframe: A New Generation of Geoimaging Sensors Is Poised to Change the Geoimaging Industry
By Dr. Armando Guevara
When I say the word “mainframe”, what comes to mind? If you went to college around the time I did, you
probably envision a hulking computing system that filled an entire room, serving as the beating heart of
IT operations for organizations back in the day before “IT” was even a term. If you are younger than that,
you might even ask, “What is a mainframe?” which in many ways is a perfect illustration of one of the
main themes of this article.
When I think back to those hulking machines and say the word “mainframe”, I do so with some fondness
in my voice because I’ve been around long enough to remember how they represented a huge leap
forward for the time. They also remind me of my free-spirited college days, which definitely adds to the
nostalgia. Mainframe computers were significant because they democratized computing, giving companies of any size access to power that had previously been available only to a very few of the most deep-pocketed commercial businesses and government agencies. They allowed any organization to become part of the computer age, provided it had a basement big enough and an electricity budget large enough to support one.
Mainframes were powerful for their time, but no one from that era would argue with the reality that they
were expensive to buy, expensive to maintain and fix, bulky to the nth degree, inflexible, and slow to adapt to organizations’ evolving computing needs. They had all of those shortcomings, but nonetheless those of us who started our careers in a different era have strong feelings of fondness for them. They had a long run of technological supremacy, and then they had to face what I call the three key transforming forces of our time (“the 3 forces”): 1) data and all devices became
digital, 2) devices became increasingly smaller, and 3) devices became increasingly faster. Those three
forces converged so rapidly that within 10 years “mainframe power” was put in the palm of everyone’s
hands (along with a lot of other functionality, including the ability to make calls, take pictures, capture
video, listen to the radio, watch TV, get GPS coordinates and more—all spatially-enabled in a single
device). So with few exceptions, mainframe computers yielded to the 3 forces and made way for more
powerful, less expensive, more flexible technological descendants: minicomputers, workstations, then PCs and now handhelds (see Guevara 1994, The Spatial Enabling of Information).
How does that relate to sensor technology? Sensors are undergoing a transformation that is remarkably
similar to what happened to mainframes. Monolithic, single-purpose sensors are in many ways the mainframes of the geoimaging industry. They have been around forever (well, 10 years is “forever” these days), and many of us have become attached to them, just as we did with the mainframes we worked with at the beginning of our careers. But a new generation of sensors is emerging, poised to replace those single-purpose sensors just as the workstation and PC replaced the mainframe years ago.
For too long, in my view, manufacturers have focused on selling single-purpose, monolithic, mission-specific sensors (from medium-format EO below 17 kps to large formats above 17 kps). Those sensors were
very good at what they were designed to do. Clearly that is true, otherwise there is no way they would
have lasted over a decade as they have. But those strengths came with weaknesses that geospatial
collection companies have had to patiently cope with in order to do their day-to-day jobs: lack of
flexibility, limited scalability, non-standards-based architectures, interoperability challenges, high up-front costs, expensive maintenance and other downsides. Yes, there is a nostalgic beauty to single-purpose sensors that are big hunks of elegantly designed metal built to do a specific job. I have an
undeniable admiration for them and their pioneering manufacturers, but I feel strongly that the
technological advancements, higher performance and dramatically lower cost of the next generation of
sensors will do to monolithic, traditional sensors, what PCs did to the mainframe.
Instead of using a single-purpose, monolithic design, these next-generation sensors are designed to be
multi-purpose, functionally flexible and far more cost-effective. They are smaller, faster and easier to work with (naturally aligned with the 3 forces). They are also more adaptable to changing job requirements, which is increasingly important for collection companies that often have dramatically different jobs during a single day using the same aircraft. Cost is another big difference between the
traditional sensors and the new generation of solutions. Traditional sensors have high up-front costs and
are expensive to maintain because they do not have standards-based architectures and are difficult to
service in the field. In contrast, next-generation sensors are being built using standards and COTS
components that give them lower up-front costs and simpler, less expensive maintenance requirements
after they are put to work.
When I talk to peers about multi-purpose sensors, the topic of performance always comes up. Many folks
assume that single-purpose sensors must offer better accuracy or performance because they focus on one thing. They also assume that because multi-purpose sensors are economical, they must deliver less performance or be of inferior quality. As my teenage son would say, the response to all of those assumptions is: “Not!”
The truth is that multi-purpose sensors are at least as good as traditional sensors, and aim to become far
better in terms of precision, collection capabilities and other key performance metrics. That removes the
biggest potential objection to next-generation sensors, which then allows people to focus on the topic of
cost and ease of use.
Old-style monolithic sensors are typically built with proprietary architectures that make them expensive
to buy and very costly to maintain and repair. A typical monolithic EO geoimaging system for large-area collection (more than 17 kps, or kilo-pixel swath) can cost upwards of $1 million, and when the system breaks down, it creates sizable opportunity costs: airplanes sit idle for days while the unit is
shipped off for repair. In contrast, multi-purpose sensors are built on open architectures that use
commercial off-the-shelf components. That makes their up-front cost a fraction of what monolithic
systems cost, and maintenance and repairs are dramatically less time-consuming and costly since
components can be easily swapped out to complete repairs rather than sending the unit off to be repaired.
Another attribute that gives next-generation, multi-purpose sensors a sizable advantage over monolithic
ones is scalability, both in terms of collection capacity (from medium to large formats) and functionality. Single-purpose
sensors are highly inflexible by their very nature. You get everything in one box, and because of that, you
pay for features you may not need, at least not when first bought. These “mainframe-like” sensors are
designed to do a single thing, and they aim to do that one job the best they can. But collection companies
today have to be flexible in adapting to the specs of each job they are hired to do. In a given day, they
may do multiple jobs that require very different types of imagery. This diversity of jobs is a direct result
of the growth of our industry.
As more industries learn how to take advantage of geospatial imagery and set out to generate revenue by applying the Science of Where, the variety of jobs for a collection company has grown in ways
that were unthinkable just a few years ago. They need sensors that can adapt from job to job and hour to
hour, and monolithic sensors are incapable of doing so. To increase the collection capability of one of
those traditional sensors, a collection company would need to buy a whole new sensor. And to add a
capability like oblique/3D, multi-spectral or thermal imaging, it would need to go out and buy or lease a
specialized sensor just for that job. That just doesn’t make sense today.
In contrast, next-generation multi-purpose sensors have scalable collection capabilities, allowing
collection companies to compete for larger projects by increasing the size of their collection swath (kps). Monolithic sensors have a fixed kps, but multi-purpose sensors can scale up on the fly, without a lot of hullabaloo: the change can be made in the field, like flipping a switch between jobs.
Functional scalability is also a huge advantage of multi-purpose sensors, allowing collection companies to
add infrequently-used collection capabilities on the fly if a customer needs them. As an example, a
collection company that is hired to do an agricultural imaging job once a quarter is often unable to do it with its existing monolithic, single-purpose sensors. It must go out and lease multi-spectral sensors at great cost and effort just for that job, and it may not need them again for several months.
Multi-purpose sensors can add functionality on the fly, allowing a collection company to perform that specialized job with its existing system.
The next big wave in our industry will be collection via new devices, such as unmanned vehicle systems
(UVSs) and smart-handheld “geospatial gadgets”, and multi-purpose sensors are ideally suited to support
those applications because these sensors can be miniaturized and their open platform makes it simple to
map to the software of UVSs and mobile devices. Old-style single-purpose sensors have architectures that are not compatible with these new applications, and I believe this incompatibility, along with the reasons outlined above, will be a major driver of migration away from monolithic sensors in the next few years.
Traditional sensors still have their strengths, and I believe they may continue to have a welcome home in
niche applications for a bit longer, just as some mainframe computers continue to be used today.
But the cost pressure alone will be a huge factor in driving adoption of multi-purpose sensors. Collection
companies (especially outside the U.S. and Europe) typically do not have $1 million or more lying around to spend on sensor platforms and costly “mainframe-like” IT processing environments, particularly when multi-purpose sensors tend to cost at least 50% less while offering better performance,
flexibility and scalability.
Geospatial collection companies have important decisions to make about which type of sensor technology
performs best for them and will meet their needs today, tomorrow and well into the future. The solutions they select must reduce costs, increase ROI, and be multi-purpose and reconfigurable in the field as needed. Very importantly, they must also be as resilient to digital obsolescence as possible, to extend operational life and ensure a return on the investment.
About the Author:
Dr. Armando Guevara is the President and CEO of Visual Intelligence (www.visualintell.com), a
company that provides geoimaging solutions for airborne, terrestrial and mobile applications including
the iOne family of sensors and the iOne STKA, which won the 2013 Technology Innovation in Sensors
Award from the Geospatial Forum.