The US Air Force is now operating a powerful airborne imagery sensor called the Gorgon Stare Increment 2 pod, allowing a single unmanned aircraft to monitor, in high resolution, everything that moves across a 100km² area for several hours at a time.

Sierra Nevada, the integrator of the Gorgon Stare pods, said on 1 July that the Increment 2 system installed on the General Atomics Aeronautical Systems MQ-9 Reaper passed the air force’s initial operational capability milestone earlier this year.

Achieving operational status fulfils a five-year vision set in motion by then-Secretary of Defense Robert Gates, who had grown impatient with a perceived lack of surveillance support from the US Air Force in Iraq and Afghanistan.

Gates’ impatience ultimately led to the firings of Secretary of the Air Force Michael Wynne and chief of staff General Michael Moseley. A new USAF leadership quickly launched two programmes – the manned MC-12 Liberty fleet and Gorgon Stare for the MQ-9 fleet – in 2009.

Both projects were aimed at addressing a lack of overhead surveillance for tactical forces on the ground. The USAF’s fleet of MQ-1B Predators, each equipped with a single sensor with a narrow field of view, was criticised for providing a “soda straw” view of the ground.

Battlefield commanders wanted a sensor that could provide constant surveillance over a large area, detecting, recording and tracking all moving objects.

The Gorgon Stare Increment 1 system, fielded in March 2011, was the air force’s first step. The 61cm (24in)-diameter turret developed by Exelis provides persistent coverage of a 16km² area, while simultaneously delivering multiple video feeds of different spots requested by users on the ground.

The air force always planned to upgrade to the Increment 2 system by incorporating a new sensor, called the ARGUS-IS, developed jointly by the Defense Advanced Research Projects Agency (DARPA) and BAE Systems.

The coverage area provided by the ARGUS-IS – the autonomous real-time ground ubiquitous surveillance imaging system – grows to 100km². It fuses data collected by 368 cameras, each capable of capturing 5 million pixels, to create a composite image of about 1.8 billion pixels, according to Lawrence Livermore National Laboratory, whose research led to the DARPA programme.
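
As a rough cross-check, the quoted composite resolution follows directly from the camera count and per-camera resolution reported above. The short Python sketch below simply multiplies them out; the variable names are illustrative and not drawn from any ARGUS-IS software.

```python
# Rough arithmetic check of the ARGUS-IS composite image size,
# using the figures quoted above: 368 cameras of ~5 million pixels each.
cameras = 368
pixels_per_camera = 5_000_000

composite_pixels = cameras * pixels_per_camera
print(f"Composite image: {composite_pixels / 1e9:.2f} billion pixels")
# -> Composite image: 1.84 billion pixels, consistent with the ~1.8 billion quoted
```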

The video, collected by the sensor at 12 frames per second, creates several terabytes of data every minute, which software compresses to a volume that can be stored and processed on a few hard drives.
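
The “several terabytes every minute” figure can likewise be sanity-checked from the composite image size and frame rate. The sketch below assumes a nominal raw encoding of two bytes per pixel, which is an assumption for illustration only and not a published ARGUS-IS specification.

```python
# Rough estimate of the raw (uncompressed) data rate implied by the quoted
# figures: ~1.84 billion pixels per composite frame at 12 frames per second.
composite_pixels = 1.84e9      # pixels per composite frame (368 x 5 million)
frames_per_second = 12
bytes_per_pixel = 2            # assumed raw encoding depth; not a published figure

bytes_per_minute = composite_pixels * frames_per_second * bytes_per_pixel * 60
print(f"Raw data rate: ~{bytes_per_minute / 1e12:.1f} TB per minute before compression")
# -> roughly 2-3 TB per minute, in line with the "several terabytes" quoted above
```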

“We've rapidly fielded a never-before-built, state-of-the-art system that delivers urgently needed and unprecedented warfighting capabilities,” says Dave Bullock, vice-president for Sierra Nevada’s Persistent Surveillance Systems.

Source: FlightGlobal.com