With the widespread installation of visual surveillance systems over the past decade, image-based fire flame detection has become an important issue because it is closely related to public safety and property. In particular, if initial combustion could be detected immediately, the resulting damage would be greatly reduced. Today, the most frequently used flame detection techniques are based on smoke sampling, temperature sampling, and air transparency testing, in addition to traditional ultraviolet and infrared flame detectors. However, most of these detectors suffer from severe problems. First, they require close proximity to the fire source. In addition, they are not always reliable, because they do not detect the combustion itself; instead, they detect the byproducts of combustion, which may be produced in other ways. As a result, they usually yield high false-alarm rates. To improve the fire detection rate, some researchers have combined these techniques in hybrid sensors, and others have applied soft computing techniques to assist detection. However, these methods seldom provide additional descriptive information about flame location, size, burning degree, and so on. Moreover, by the time these detectors sense a fire, it has usually grown very large. That is, they usually fail to catch initial combustion and thus result in greater damage.
Since Healey et al. presented their novel image-based fire detection system in 1993, many image-based fire detection methods have been proposed, because images can provide more information, as discussed previously, such as location, size, and burning degree. Healey et al. presented an aircraft jet fuel fire detection system that takes color video of a predivided scene as input to approximately locate fires and estimate flame sizes by detecting the grids that appear in video frames. Noda and Ueda (1994) used infrared images to detect tunnel fires by comparing the normal intensity histograms of various tunnel fire images with that of a captured image to determine whether an obvious intensity variation exists. Foo (2000) applied statistical measures, such as the first moment, standard deviation, and mean, to estimate flame sizes in aircraft dry bays and engine compartments from gray-level images by analyzing the relationships between intensity changes at different burning degrees in a set of test flame images. These two approaches were designed for specific environments and could not detect spurious flames because they use gray-level images as input. Yamagishi and Yamaguchi (1999) employed a neural network to recognize fires from Fourier transforms of fire contours extracted from color images in the HSV color space. Later, Phillips et al. (2002) proposed a sophisticated method for recognizing flames in color video by considering the temporal variation of flames. In 2004, Liu and Ahuja employed Gaussian distributions to model fire colors and Fourier transforms to describe fire contours for fire detection. Recently, Toreyin et al. (2005) used a hidden Markov model to describe fire burning states; they also employed a wavelet method to estimate flame variations in the spatial and temporal domains. These methods provide more precise ways to distinguish real fires from spurious ones.
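To make the color-modeling idea concrete, the following is a minimal sketch of a Gaussian fire-color classifier in the spirit of Liu and Ahuja's approach: pixels are scored by their log-likelihood under a single Gaussian over RGB values. The mean, covariance, and threshold used here are illustrative assumptions, not values from any of the cited papers.

```python
import numpy as np

# Assumed parameters of a single Gaussian fire-color model in RGB space.
# Real systems estimate these from labeled flame pixels; the values below
# are placeholders chosen to represent a typical orange-yellow flame color.
FIRE_MEAN = np.array([220.0, 120.0, 40.0])   # assumed mean RGB of flame pixels
FIRE_COV = np.diag([900.0, 1600.0, 900.0])   # assumed diagonal covariance

def fire_log_likelihood(rgb_pixels):
    """Gaussian log-likelihood of each RGB pixel under the fire-color model.

    rgb_pixels: (N, 3) array of RGB values in [0, 255].
    """
    pixels = np.asarray(rgb_pixels, dtype=float)
    diff = pixels - FIRE_MEAN
    inv_cov = np.linalg.inv(FIRE_COV)
    # Squared Mahalanobis distance of each pixel from the model mean.
    mahal = np.einsum('ij,jk,ik->i', diff, inv_cov, diff)
    log_norm = -0.5 * (3 * np.log(2 * np.pi) + np.log(np.linalg.det(FIRE_COV)))
    return log_norm - 0.5 * mahal

def is_fire_colored(rgb_pixels, threshold=-20.0):
    """Label pixels whose log-likelihood exceeds an assumed threshold."""
    return fire_log_likelihood(rgb_pixels) > threshold
```

A full detector would combine such a per-pixel color test with the contour or temporal analyses described above to reject spuriously fire-colored objects.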
However, they are too complex for real-time applications, and they cannot detect initial combustion. For an image-based flame monitoring system to be of practical use, so that users are informed by a fire alarm as early as possible and can view the flame images on displays, the system must work in real time and must apply to general environments rather than being restricted to specific situations.