
Hindawi Publishing Corporation
EURASIP Journal on Embedded Systems
Volume 2007, Article ID 80141, 14 pages
doi:10.1155/2007/80141
Research Article
Reconfigurable On-Board Vision Processing for
Small Autonomous Vehicles
Wade S. Fife and James K. Archibald
Department of Electrical and Computer Engineering, Brigham Young University, Provo, UT 84602, USA
Received 1 May 2006; Revised 17 August 2006; Accepted 14 September 2006
Recommended by Heinrich Garn
This paper addresses the challenge of supporting real-time vision processing on-board small autonomous vehicles. Local vision
gives increased autonomous capability, but it requires substantial computing power that is difficult to provide given the severe
constraints of small size and battery-powered operation. We describe a custom FPGA-based circuit board designed to support
research in the development of algorithms for image-directed navigation and control. We show that the FPGA approach supports
real-time vision algorithms by describing the implementation of an algorithm to construct a three-dimensional (3D) map of the
environment surrounding a small mobile robot. We show that FPGAs are well suited for systems that must be flexible and deliver
high levels of performance, especially in embedded settings where space and power are significant concerns.
Copyright © 2007 W. S. Fife and J. K. Archibald. This is an open access article distributed under the Creative Commons
Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is
properly cited.
1. INTRODUCTION
Humans rely primarily on sight to navigate through dy-
namic, partially known environments. Autonomous mobile
robots, in contrast, often rely on sensors that are not vision-
based, ranging from sonar to 3D laser range scanners. For
very small autonomous vehicles, many types of sensors are
inappropriate given the severe size and energy constraints.
Since CMOS image sensors are small and a wide range of
information can be extracted from image data, vision sen-
sors are in many ways ideally suited for robots with small
payloads. However, navigation and control based primarily
on visual data are nontrivial problems. Many useful algo-
rithms have been developed—see, for example, the survey of
DeSouza and Kak [1]—but substantial computing power is
often required, particularly for real-time implementations.
For maximum flexibility, it is important that vision data
be processed not only in real time, but on board the au-
tonomous vehicle. Consider potential applications of small,
fixed-wing unmanned air vehicles (UAVs). With wingspans
of 1.5 meters or less, these planes are useful for a variety of
applications, such as those involving air reconnaissance [2].
The operational capabilities of these vehicles are significantly
extended if they process vision data locally. For example, with
vision in the local control loop, the UAV’s ability to avoid
obstacles is greatly increased. Remotely processing the video
stream, with the unavoidable transmission delays, makes it
difficult if not impossible for a UAV to be sufficiently respon-
sive in a highly dynamic environment, such as closely fol-
lowing another UAV employing evasive tactics. Remote pro-
cessing is also made difficult by the limited range of wireless
video transmission and the frequent loss of transmission due
to ground terrain and other interference.
The goal of our work is to provide an embedded comput-
ing framework powerful enough to perform real-time vision
processing while meeting the severe constraints of size, weight,
and battery power that arise on small vehicles. Consider,
for example, that the total payload on small UAVs is often
substantially less than 1 kg. Many applicable image process-
ing algorithms run at or near real time on current desktop
machines, but their processors are too large and require too
much electrical power for battery-powered operation. Some
Intel processors dissipate in excess of 100 W; even mobile ver-
sions of processors intended for notebook computers often
consume more than 20 W. Even worse, this power consump-
tion does not include the power consumed by the many sup-
port devices required for the system, such as memory and
other system chips.
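The impact of processor power draw on a battery-powered platform can be made concrete with a rough runtime calculation. The sketch below uses illustrative numbers (the battery capacity and the FPGA power figure are assumptions, not values from this paper) to show why a 20-100 W processor is impractical within a sub-1 kg payload:

```python
# Rough battery-life arithmetic for an on-board vision system.
# All numbers are illustrative assumptions, not measurements from the paper.

def runtime_hours(battery_wh: float, load_w: float) -> float:
    """Ideal runtime in hours for a given battery capacity and steady load."""
    return battery_wh / load_w

BATTERY_WH = 30.0  # assumed capacity of a small-UAV battery pack

# Compare processing-only power budgets (support chips excluded, as the
# text notes they would only make the desktop/mobile cases worse).
for label, watts in [("desktop CPU", 100.0),
                     ("mobile CPU", 20.0),
                     ("FPGA pipeline (assumed)", 2.0)]:
    print(f"{label:24s} {watts:6.1f} W -> {runtime_hours(BATTERY_WH, watts):5.2f} h")
```

Under these assumed figures, the desktop-class processor drains the pack in under 20 minutes of compute alone, before accounting for motors or support electronics, while a low-power reconfigurable device leaves most of the energy budget for flight.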
This paper describes our experience in using field-
programmable gate arrays (FPGAs) to satisfy the com-
putational needs of real-time vision processing on-board