Sometimes, the best way forward is to begin with a statement of the obvious. Consider this: “Landing is a challenging aspect of flight because, to land safely, speed must be decreased to a value close to zero at touchdown.” No aviator would dispute that, possibly excepting carrier pilots accustomed to smacking the deck (albeit with a perilously thin safety margin).

Starting with this premise, Emily Baird of Lund University in Sweden and colleagues looked at how flying animals taper their vertical speed to zero for touchdown. Studying honey bees in controlled laboratory conditions, the team concluded that bees’ method is simple and elegant, and suggests – as the title of their paper, published in the 4 November issue of the Proceedings of the National Academy of Sciences, has it – “A universal strategy for visually guided landing”.

What bees seem to do when coming in to land vertically is to adjust their speed so as to hold steady the rate of expansion of their visual image of the surface they are approaching. The beauty of this method is that it requires very little input data or calculation: bees’ eyes are too close together to give a useful stereo view, and their nervous systems are rudimentary. As Baird notes, this method could apply equally well to a flying robot.

The mechanism is intuitively obvious, and easily tested by walking toward something. Get closer to an object and it appears progressively larger; the closer you get, the faster it appears to grow. Holding that rate of expansion constant, clearly, requires an ever-slower approach.
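That thought experiment can be sketched in a few lines of code. This is a toy simulation, not the paper’s model, and the expansion-rate constant, time step and starting distance are invented for illustration. Holding the relative rate of expansion – approach speed divided by distance – constant forces speed to decay in step with distance:

```python
EXPANSION_RATE = 0.5   # held constant, in 1/s (hypothetical value)
DT = 0.01              # simulation time step, in seconds

distance = 2.0         # metres from the surface
speed = EXPANSION_RATE * distance
for _ in range(1000):  # 10 simulated seconds
    speed = EXPANSION_RATE * distance  # keep speed/distance constant
    distance -= speed * DT            # close in on the surface

# Distance and speed shrink together, reaching near zero at touchdown.
print(distance, speed)
```

Note that the controller never needs to know distance or speed in absolute terms; sensing the expansion rate alone, and nudging speed to hold it steady, produces the same exponential slow-down.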

As Baird tells Flightglobal, it has been understood for decades that human pilots subconsciously use a variation of this effect when approaching a runway, and it has been shown that bees do the same in a “grazing” landing on a horizontal surface. As an aircraft, or bee, decreases altitude but maintains forward speed, the apparent fore-to-aft speed of the ground – its optical flow – increases. Progressively slowing forward speed in proportion to height holds that optical flow constant, and because descent rate scales with forward speed on a steady glide, vertical speed also falls to near zero at touchdown.
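The grazing case can be sketched the same way. Again this is an illustrative toy, not the study’s model, and the flow constant and glide angle are made-up numbers: holding the ground’s optical flow (forward speed divided by height) constant while descending at a fixed glide angle brings height, forward speed and sink rate toward zero together.

```python
import math

OPTICAL_FLOW = 2.0             # held constant, rad/s (hypothetical)
GLIDE_ANGLE = math.radians(5)  # fixed descent angle (hypothetical)
DT = 0.01                      # time step, seconds

height = 3.0                   # metres above the surface
for _ in range(2000):          # 20 simulated seconds
    forward_speed = OPTICAL_FLOW * height           # keeps flow constant
    sink_rate = forward_speed * math.tan(GLIDE_ANGLE)
    height -= sink_rate * DT

# Height, forward speed and sink rate all decay together toward zero.
print(height, forward_speed, sink_rate)
```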


But while a grazing landing on a horizontal surface may be standard for a fixed-wing aircraft, it is unusual for a bee – and in either case the optical flow method becomes very complicated on an inclined surface. So Baird and her colleagues set out to test their rate-of-expansion hypothesis by studying how bees manage to land safely when approaching a surface along a perpendicular path. To do this, they trained bees kept indoors to seek food on a vertical surface. Then they sought to trick the bees into thinking they were approaching faster or slower than they actually were, by manipulating the apparent rate of expansion of the image of that surface.

In trials using a surface painted with a checkerboard or concentric circles pattern, the bees’ approach speed slowed linearly as distance closed. When approaching a surface painted with an iron cross pattern, the bees began slowing down only when very close to the surface. When approaching a rotating spiral pattern, they decreased their approach speed more quickly or more slowly depending on which way the spiral was turning.

In all cases, the bees’ behaviour supports the hypothesis that rate of image expansion is their cue.

Moreover, when the spiral was rotating so as to slow down the apparent rate of image expansion, the bees delayed their shift from cruise mode to landing mode.

The obvious alternative highlights the value of this very simple approach. By measuring flight speed and distance to target simultaneously on a moment-to-moment basis, it would be possible to reduce speed progressively. But, as Baird points out, that method would be computationally very demanding and, as noted already, is unsuitable for bees and other flying insects, which have neither stereo vision nor vertebrates’ ability to change focal distance.

That complex approach is also inappropriate for unmanned aircraft. UAVs, especially small ones, are payload constrained, so it is clearly better if they can land safely without carrying special sensors and heavy-duty computer power. The same consideration applies, says Baird, to an automated sense-and-avoid system for a UAV – so for engineers, learning to mimic bees should be a priority.

What underpins bees’ ability to land safely and avoid obstacles is, as Baird sums up, the ability to reliably detect motion in their environment. To do that takes the right sensor, and the ability to reduce “noise” in the signal. Bees and other insects, she says, have a built-in noise reduction system – they can move their heads to keep their eyes steady even if their bodies are moving. Human pilots are also good at that.

In Baird’s view, reducing this noise is the main obstacle to employing a simple rate-of-image-expansion technique on UAVs. A downward-aiming camera would most likely be fixed, which would make landing in all but still air very difficult. Imagine trying to make sense of a detailed image on a jiggling screen.

Fortunately, adequate image stabilisation may be the only significant obstacle to engineering a bee-style landing system. Algorithms for measuring the rate of optical flow already exist, and one of Baird’s colleagues, Mandyam Srinivasan of the University of Queensland, has shown that these can be exploited to make an automated horizontal landing system that does not need to know altitude or air speed.
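At its most minimal, the cue such an algorithm needs could be estimated from the change in apparent size of a tracked feature between two camera frames. The sketch below is hypothetical – the function name and figures are invented for illustration, and real systems compute optical flow across the whole image rather than from a single feature:

```python
def relative_expansion_rate(size_prev, size_now, dt):
    """Fractional growth of a feature's apparent size per second,
    which approximates (approach speed / distance to surface)."""
    return (size_now - size_prev) / (size_prev * dt)

# A feature growing from 40 to 42 pixels across in 0.1 s:
rate = relative_expansion_rate(40.0, 42.0, 0.1)
print(rate)  # 0.5 per second
```

A controller would then speed up or slow down to hold that measured rate at a chosen set-point, exactly as the bees appear to do.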

Bees, after all, have an admirable flight safety record – and no instruments.