
A Flame Detection Method Based on Novel Gradient Features

  • Zhu Liping, Li Hongqi, Wang Fenghui, Lv Jie, Sikandar Ali and Zhang Hong
Published/Copyright: July 17, 2018

Abstract

In this study, we present a novel approach to efficiently detect flame in images of multiple scenes. The method uses a parametric representation named Gradient Features (GF) to learn how flame colors change within an image. Unlike traditional flame color features, GF further captures the color changes across the RGB channels. A support vector machine was applied to generate a set of candidate regions, and a decision tree model was used to judge flame regions based on GF. Extensive experiments were conducted to verify the validity and effectiveness of the proposed method. The results showed that the proposed method can accurately handle difficult cases such as yellow lights and sunrise scenes. A comparison with preceding state-of-the-art methods showed that our method exploits the symmetry of flame regions and achieves better results.

1 Introduction

Flame detection has drawn increasing attention from the research community, owing to its numerous potential applications in public safety, such as video analytics. The major task of flame detection is to detect flame in static pictures in order to cope with fire accidents in the real world. Compared to normal object detection tasks, it is substantially more challenging, as a flame has no definite shape.

Over the past several years, advances in machine learning have led to remarkable progress in object detection and recognition. Existing approaches consist of two main parts: one part obtains candidate flame regions based on some color model, while the other part checks whether a candidate region is a flame region by using features such as edge features [10], color features [2, 6, 12, 13, 15], movement features [6], and area features [4]. Commonly used models incorporate the background subtraction algorithm, Markov models, statistical analysis, cumulative matrices, and frequency analysis. According to the literature [1, 2, 4, 6, 12, 13, 14, 15, 17], the accuracy of flame detection has been significantly improved recently. Nevertheless, the performance of flame detection methods remains unsatisfactory.

In this paper, an efficient method for flame detection with state-of-the-art accuracy in complex environments is proposed. The proposed method first nominates candidate flame regions based on color features, by checking whether each pixel color is a flame color. Statistical data show that there is a large difference between flame and non-flame regions in how pixel values change. A Gradient Features (GF) representation, namely the mean and variation of color gradients in the different channels together with some statistical features based on these gradients, is then extracted to represent these changes. Furthermore, principal component analysis (PCA) is applied to reduce the feature dimension, and a decision model is trained to decide whether a region is a flame region. The experiments demonstrate that this representation achieves high-quality results in flame detection. The major contribution of this work is a framework based on GF, a novel feature representation method for detecting flame in pictures.

2 Related Works

The typical approaches to flame detection are based on probabilistic models, background subtraction models, and Markov models [1, 2, 14, 17], with edge, color, and shape being the most commonly used features.

Gomes et al. [6] applied a method to track a moving object and classify it, with the aim of decreasing the false-alarm ratio. The work of Toreyin et al. [13] was based on a dynamic background subtraction model, a fast moving-object detection algorithm. However, this kind of method depends solely on the extraction of the background image and is sensitive to distracting objects, such as floating leaves and moving red objects, because the dynamic behavior of a flame is shaking in place rather than translation of the whole object.

Dukuzumuremyi et al. [4] employed a method called stereo-based object detection to extract the foreground object and applied the wavelet transform method to generate features. Moreover, fuzzy-neural networks were used to judge whether the candidate region is a flame region or not.

Borges and Izquierdo [1] assumed that the pixel values of the RGB channels follow independent Gaussian distributions; they compared the probability of a flame with the highest probability (the peak of the histogram) of the Gaussian distribution to decide whether a region is a flame region. Zhang et al. [17] modified Borges and Izquierdo’s method and, in the second part of their approach, used changes of area to judge the flame region. However, these two methods are not convincing, as they assume that the RGB pixel values follow independent Gaussian distributions; a multivariate normal distribution is more suitable for the real situation.

Lin et al. [9] also took advantage of color rules in RGB and hue-saturation-intensity color space to quickly check candidate flame regions. However, owing to the difficulty of choosing thresholds for the rules, this method cannot be applied in all situations.

Khatami et al. [8] used a combination of particle swarm optimization and the K-Medoids method to construct an optimum linear transformation matrix. They transformed the RGB color space into a new space in which the flame colors concentrate along a line, which simplifies the later judgment of the flame region.

Zhao et al. [18] applied a simple color rule and a flame motion feature to roughly extract candidate flame regions, and used the K-Singular Value Decomposition algorithm to learn an overcomplete dictionary.

With the development of deep learning, a recent work [16] incorporated deep neural networks into the detection framework and obtained improved performance. However, the deep learning method does not work well when the flame regions are widely distributed in spatial space.

In this paper, we propose a statistical method named GF, which aims to capture the changes of pixel colors across the RGB channels. Extensive experiments showed that our method outperforms state-of-the-art techniques to some extent.

3 Proposed Method

3.1 Proposed System

The proposed framework takes fire flame pixels in RGB space as input and the two-dimensional locations of the flame region in the image as output. Figure 1 illustrates the whole operational process.

Figure 1: An Overview of the Structured Flame Detection Framework.

As shown in Figure 1, the whole process consists of two main stages: the first stage obtains candidate flame regions by applying a support vector machine (SVM) model, trained on labeled pixels, to the test image; the second stage judges each candidate region with a decision tree model trained on the reduced GF.

In the first stage of the framework, the SVM model is trained on manually collected fire flame pixels in RGB space. The model is then applied to find flame pixels in the image, and a flood-fill algorithm over the recognized flame pixels builds connected regions that are marked as candidate flame regions. Next, the GF is extracted from each candidate region, and a data pre-processing method, PCA, is applied to the extracted GF to reduce the feature dimension. Finally, a decision tree model trained on the reduced features classifies each candidate region into one of two categories: flame region or non-flame region.

The second stage of the framework judges whether a candidate region is a real flame region or a non-flame region. Real data show that there is a remarkable difference between the two. Based on the candidate flame region, symmetric features such as the changes of color values in eight directions are extracted: the mean and standard deviation of the gradient values along all eight directions of the region are computed. Specifically, 10 statistical values are extracted from the eight directional beams, and PCA is applied to reduce the feature dimension to six. A decision tree model is then built on the reduced features to classify the region as a flame region or a non-flame region.

3.2 Candidate Flame Region Identification

To obtain candidate flame regions, our method traverses all pixels in an image and classifies them into flame or non-flame using a one-class SVM. A flood-fill algorithm then organizes the flame pixels into connected regions, called candidate regions. Figure 2 illustrates the training process of this one-class SVM.
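The pixel-grouping step just described can be sketched as a simple flood fill; the boolean flame mask below is a hypothetical stand-in for the SVM's per-pixel output:

```python
# Sketch of candidate-region generation: group flame-mask pixels into
# 8-connected regions with a breadth-first flood fill.
from collections import deque

import numpy as np


def flood_fill_regions(mask):
    """Group True pixels of a 2D boolean mask into 8-connected regions."""
    h, w = mask.shape
    seen = np.zeros_like(mask, dtype=bool)
    regions = []
    for y in range(h):
        for x in range(w):
            if mask[y, x] and not seen[y, x]:
                queue, region = deque([(y, x)]), []
                seen[y, x] = True
                while queue:
                    cy, cx = queue.popleft()
                    region.append((cy, cx))
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = cy + dy, cx + dx
                            if (0 <= ny < h and 0 <= nx < w
                                    and mask[ny, nx] and not seen[ny, nx]):
                                seen[ny, nx] = True
                                queue.append((ny, nx))
                regions.append(region)
    return regions


# toy mask with two separate flame-colored blobs
mask = np.array([[1, 1, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=bool)
print(len(flood_fill_regions(mask)))  # 2
```

Each returned region is a list of pixel coordinates, from which a bounding rectangle for the candidate region can be taken directly.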

Figure 2: Training Process of SVM.

3.2.1 Flame Color Features Extraction

There are many hand-crafted flame features, such as contour features, motion features, and flame color features. Owing to the irregularity of the flame contour, especially in blowing wind, judging a flame by its contour is difficult. In real life, human beings mostly use the color of a flame to distinguish it. As the distribution of flame pixel color values is very wide, this paper mainly focuses on red and yellow flames.

A widely used classifier, the SVM [3], is applied to roughly check the category of a pixel. Ho [7] applied an SVM instead of a fuzzy logic threshold approach and obtained good results. Different from Ho’s work, we used a one-class SVM instead of a two-class SVM to classify pixels, in order to achieve better performance.

First, multiple flame images, as shown in Figure 3, were collected, and the RGB values of each flame pixel were extracted to construct the flame color space. Each flame pixel of a flame region was extracted and organized into a group; the final flame color space consisted of 95,619 samples, whose distribution is shown in Figure 4.

Figure 3: Marked Flame Pixels.

Figure 4: Distribution of Pixel Color Value in RGB Space.

As shown in Figure 4, the pixel value of the R channel was distributed more widely compared with the other two channels in flame images.

3.2.2 Candidate Flame Region Generation

The SVM is a widely used classification algorithm; through the kernel method, it enables non-linear mapping into a high-dimensional feature space without the associated computational cost. In this study, a widely used kernel, the radial basis function (RBF) kernel, was applied in the experiments, and a grid search was used to choose the optimum parameters. The chosen parameters of the SVM model are presented in Table 1.

Table 1: Parameters and Evaluation Results of the Trained SVM Model.

Kernel    Train error    Test error    Outlier error    ν       γ
RBF       0.10           0.10          0.18             0.10    0.90
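A minimal sketch of this first stage with scikit-learn, assuming a one-class SVM with RBF kernel over RGB triples scaled to [0, 1]; the synthetic pixel data and grid values are illustrative stand-ins, not the paper's training set:

```python
# One-class SVM pixel classifier with a small manual grid search over
# (nu, gamma), keeping the model with the lowest training rejection rate.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# synthetic "flame" pixels: R channel dominates, G in between, B small
flame_rgb = np.column_stack([
    rng.uniform(0.7, 1.0, 500),
    rng.uniform(0.3, 0.9, 500),
    rng.uniform(0.0, 0.4, 500),
])

best, best_err = None, 1.0
for nu in (0.05, 0.10, 0.20):
    for gamma in (0.5, 0.9, 2.0):
        model = OneClassSVM(kernel="rbf", nu=nu, gamma=gamma).fit(flame_rgb)
        err = float(np.mean(model.predict(flame_rgb) == -1))  # rejected fraction
        if err < best_err:
            best, best_err = model, err

# a flame-like pixel should be accepted (+1), a blue pixel rejected (-1)
print(best.predict([[0.9, 0.6, 0.1], [0.1, 0.2, 0.9]]))
```

Scanning the image then amounts to calling `predict` on every pixel's RGB triple and passing the resulting mask to the flood-fill step.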

The 10-fold cross-validation method was applied to evaluate the models. Once each pixel of the image was categorized as flame or non-flame, the flood-fill algorithm organized the flame pixels into connected regions, as shown in Figure 5.

Figure 5: Candidate Region Generation Based on the SVM Model.

The trained SVM tries to find the best possible candidate flame regions. However, some flame-like regions, such as the white smoke shown in Figure 5, are also marked as flame regions because their colors closely resemble flame in RGB color space. Therefore, the candidate flame regions require further judgment by other models based on more representative features.

3.3 Judgment of a Flame Region

In real life, judging a flame only by its pixel color is prone to errors, for example with moving maple leaves. Flying maple leaves share the same color and motion features as a flame, which makes them difficult to distinguish from flame. Moreover, as night lights share the flame’s color and region symmetry, they are also difficult to differentiate from real flame if the background is ignored.

However, the change in pixel values is different between a flame and a non-flame object. Specifically, in flame regions, the red channel changes slowly, the blue channel changes very fast, and the changes of the green channel are in between.

Therefore, in this study, we took into consideration the pixel changes of flame and non-flame regions. The extracted features are supposed to represent the variation of pixel colors of real-world objects, which obeys a natural rule: the change of pixel color values is a slow and gradual process. Moreover, this changing process differs across the three channels.

The gradient computation for each pixel is shown in Figure 6.

Figure 6: Pixel Value of the Image in Spatial Space.

As shown in Figure 6, the center pixel P(x, y) has eight adjacent pixels, and we computed gradient values only along these eight directions. One directional gradient, for the direction from P(x, y) to P(x − 1, y + 1), is shown as a red arrow in Figure 6; its formula is

G = pixel(x, y) − pixel(x − 1, y + 1).
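The eight directional gradients around a pixel can be sketched as follows, applying the difference formula above to each neighbour offset:

```python
# Per-pixel directional gradients as in Figure 6: the gradient toward each
# of the eight neighbours is the difference of pixel values.
import numpy as np

# the eight neighbour offsets (dx, dy)
DIRECTIONS = [(-1, -1), (0, -1), (1, -1),
              (-1, 0),           (1, 0),
              (-1, 1),  (0, 1),  (1, 1)]


def directional_gradients(channel, x, y):
    """Eight directional gradients of a single-channel image at (x, y)."""
    # cast to int so uint8 subtraction cannot wrap around
    return [int(channel[y, x]) - int(channel[y + dy, x + dx])
            for dx, dy in DIRECTIONS]


channel = np.array([[10, 20, 30],
                    [40, 50, 60],
                    [70, 80, 90]], dtype=np.uint8)
print(directional_gradients(channel, 1, 1))
# [40, 30, 20, 10, -10, -20, -30, -40]
```

For a candidate region, the same computation is applied per channel (R, G, B) along each of the eight beams radiating from the region center.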

As a result, following the idea of pixel changes in the three channels, the gradient pixel color features, named GF, are designed to check whether a candidate region is a flame region or not. To obtain the GF of a candidate flame region, we computed the gradients along eight directions instead of over the whole region, as shown in Figure 7.

Figure 7: Region in the Red Frame is a Candidate Region.

There are two reasons for computing the gradient in eight directions: to reduce the computing complexity and to obtain more representative gradient values as the flame region is a radially distributed region.

Figure 8 illustrates the changes of pixel value following one direction for flame images and flame-like images in RGB color spaces.

Figure 8: Changing Rate of the Pixel Value for Flame and Non-flame Regions.

As shown in Figure 8, the changes of pixel value in the R channel are much slower compared with the other two channels. This rule cannot be found in red clothes or white smoke, which can be mistaken for flame in the color feature space. The difference in the pixel value changing rate in the three channels could be represented as a ratio of the average gradient of the R channel to the average gradient of the G channel.

Figure 9 demonstrates the distribution range of the flame and non-flame regions.

Figure 9: Pixel Value Gradient Distribution Range for Flame and Non-flame Regions.

Figure 9 shows the gradient ranges of real fire, red clothes, and white smoke. The gradient range of real fire is clearly much wider than that of the flame-like objects, a property that can be represented by the variance of the gradient over the eight directions.

After calculating the distribution of color values in the RGB channels, the ratio of the mean probability density to the peak value was calculated based on probability density function [17].

In this study, we not only took into consideration the pixels of a flame but also calculated the pixel changes of a flame. As a result, the features that can reflect the changes of flame color and the association of RGB channels were designed based on the formulas defined as follows:

(1) $\overline{|\Delta R|} = \frac{1}{N}\sum_{n=1}^{N} |\Delta R_{n}|$,
(2) $\sigma_{|\Delta R|} = \sqrt{\frac{1}{N}\sum_{n=1}^{N} \left(|\Delta R_{n}| - \overline{|\Delta R|}\right)^{2}}$,
(3) $\overline{|\Delta G|} = \frac{1}{N}\sum_{n=1}^{N} |\Delta G_{n}|$,
(4) $\sigma_{|\Delta G|} = \sqrt{\frac{1}{N}\sum_{n=1}^{N} \left(|\Delta G_{n}| - \overline{|\Delta G|}\right)^{2}}$,
(5) $\overline{|\Delta B|} = \frac{1}{N}\sum_{n=1}^{N} |\Delta B_{n}|$,
(6) $\sigma_{|\Delta B|} = \sqrt{\frac{1}{N}\sum_{n=1}^{N} \left(|\Delta B_{n}| - \overline{|\Delta B|}\right)^{2}}$,
(7) $\overline{|\Delta G/\Delta R|} = \frac{1}{N}\sum_{n=1}^{N} \left|\frac{\Delta G_{n}}{\Delta R_{n}}\right|$,
(8) $\sigma_{|\Delta G/\Delta R|} = \sqrt{\frac{1}{N}\sum_{n=1}^{N} \left(\left|\frac{\Delta G_{n}}{\Delta R_{n}}\right| - \overline{|\Delta G/\Delta R|}\right)^{2}}$,
(9) $\overline{|\Delta B/\Delta R|} = \frac{1}{N}\sum_{n=1}^{N} \left|\frac{\Delta B_{n}}{\Delta R_{n}}\right|$,
(10) $\sigma_{|\Delta B/\Delta R|} = \sqrt{\frac{1}{N}\sum_{n=1}^{N} \left(\left|\frac{\Delta B_{n}}{\Delta R_{n}}\right| - \overline{|\Delta B/\Delta R|}\right)^{2}}$,

where $\Delta R_{n}$ is the amount of change in direction $n$ in the R channel, $\overline{|\Delta R|}$ is the mean of $|\Delta R_{n}|$, $\sigma_{|\Delta R|}$ is the standard deviation of $|\Delta R_{n}|$, $\overline{|\Delta G/\Delta R|}$ is the mean of the ratio between the amounts of change in channels G and R, and $\sigma_{|\Delta G/\Delta R|}$ is the corresponding standard deviation. In total, the GF has 10 dimensions.
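Eqs. (1)–(10) can be sketched directly; the gradient sequences below are toy values, and a real implementation would need to guard against a zero $\Delta R_{n}$ in the ratio terms:

```python
# The 10-dimensional GF of Eqs. (1)-(10): means and (population) standard
# deviations of |dR|, |dG|, |dB| and of the ratios |dG/dR|, |dB/dR| over
# the N directional gradients of a candidate region.
import numpy as np


def gradient_features(dR, dG, dB):
    """Compute the 10-dimensional GF from per-direction gradient arrays."""
    dR, dG, dB = (np.asarray(a, dtype=float) for a in (dR, dG, dB))
    feats = []
    for seq in (np.abs(dR), np.abs(dG), np.abs(dB),
                np.abs(dG / dR), np.abs(dB / dR)):
        # np.std defaults to the population form, matching Eq. (2)
        feats.extend([seq.mean(), seq.std()])
    return np.array(feats)


gf = gradient_features([2, 4, 2, 4], [6, 8, 6, 8], [10, 12, 10, 12])
print(gf)  # [3. 1. 7. 1. 11. 1. 2.5 0.5 4. 1.]
```

The resulting 10-vector is what the PCA step below reduces to six dimensions.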

The decision tree algorithm was first applied to the above-mentioned original features to classify the candidate regions directly. However, poor experimental results were obtained with these 10-dimensional features. To achieve better performance, we reduced the feature dimension using PCA, which produced six eigenvalues and eigenvectors [5]. Figure 10 shows the reduced features for the flame region and the non-flame region.
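The PCA-plus-decision-tree step can be sketched with scikit-learn; the feature matrix and labels here are random stand-ins for the real GF data:

```python
# Second-stage classifier sketch: PCA reduces the 10-dim GF to 6 components,
# then a depth-limited decision tree separates flame from non-flame regions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))            # 200 candidate regions, 10 GF dims
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # synthetic flame / non-flame labels

clf = make_pipeline(PCA(n_components=6), DecisionTreeClassifier(max_depth=4))
clf.fit(X, y)
# fraction of total variance kept by the six retained components
print(clf.named_steps["pca"].explained_variance_ratio_.sum())
```

With the paper's real GF data, the six retained components account for 99.5% of the total eigenvalue sum, as reported below in Table 2.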

Figure 10: Values of GF.
Green represents features from flame regions; yellow represents features from non-flame regions.
(A), (B), and (C) are from different image samples.

Table 2 summarizes the outcomes of this step. The extracted six eigenvalues accounted for 99.5% of the sum of all eigenvalues. Finally, the decision tree model judged whether the color-change sequence belongs to a flame.

Table 2: Eigenvalues and Eigenvectors.

Eigenvalue Eigenvector
1094.98 0.2518 0.2693 0.2478 −0.0028 0.0056 0.4919 0.4932 0.5629 −0.0068 0.0305
272.75 0.1255 −0.0388 −0.3120 0.0102 0.0607 0.5853 0.2639 −0.6540 0.0099 0.2039
88.54 0.4230 0.5658 0.3051 0.0423 0.1572 −0.3997 0.0545 −0.3116 0.0846 0.3382
76.91 −0.2332 −0.2263 −0.2330 0.0475 0.2351 −0.1433 0.1304 0.2811 0.0913 0.8110
35.40 −0.3484 0.1580 −0.2405 −0.0238 −0.2518 −0.3990 0.7447 −0.1043 −0.0427 −0.2517
13.65 −0.4950 0.0576 0.5543 −0.2525 −0.2464 0.1333 0.0091 −0.1928 −0.4451 0.2583

Figure 11 illustrates the final decision tree models trained in our experiments.

Figure 11: Final Decision Tree in Our Experiments.

Each node in Figure 11 is a decision node: the decision space is split into two regions by comparing a feature with a split value. At the first node, for example, the split feature is x3 and the split value is −1.3096.

4 Experiments

4.1 Data Description

The 500 sample flame images used in the experiment were taken from the Internet and consisted of pictures of forest fires, bonfire, candlelight, lights, traffic fires, urban fires, etc. These images can be divided into nighttime flame images and daytime flame images according to their background scene. Therefore, this image set can validate the generalization of the algorithm.

4.2 Choice of the Center Point for a Candidate Flame Region

In Figures 13–15, the red rectangles mark candidate regions judged as flame regions, and the blue rectangles mark candidate regions judged as non-flame regions. In the experiment, it is difficult to find the center of a flame, which is mostly identified by eye. To find the center of the flame efficiently, we hypothesized that the observed flames are convex polygons. In this case, the centroid of the contour can be treated as the center of the flame, calculated with the cv2.moments function in OpenCV.

However, it is difficult to ensure that the centroid of the candidate fire region contour is the center of the flame. As the centroid of the flame is used only for extracting the GF of the flame region, there is no need to find the geometrical center of the flame precisely. As shown in Figure 12, the centroid of the region is not the geometrical center of the flame.
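The paper computes this centroid with OpenCV's cv2.moments; an equivalent plain-NumPy sketch uses the shoelace formula on a closed polygon contour:

```python
# Centroid of a contour polygon via the shoelace formula, equivalent to the
# m10/m00, m01/m00 ratios that cv2.moments would give for the same contour.
import numpy as np


def polygon_centroid(pts):
    """Centroid of a simple closed polygon given as an (n, 2) vertex array."""
    x, y = pts[:, 0], pts[:, 1]
    xn, yn = np.roll(x, -1), np.roll(y, -1)    # next vertex, wrapping around
    cross = x * yn - xn * y
    area = cross.sum() / 2.0
    cx = ((x + xn) * cross).sum() / (6.0 * area)
    cy = ((y + yn) * cross).sum() / (6.0 * area)
    return cx, cy


square = np.array([[0, 0], [4, 0], [4, 4], [0, 4]], dtype=float)
print(polygon_centroid(square))  # (2.0, 2.0)
```

As the text notes, this centroid need not be the geometric center of the flame; it only anchors the eight beams along which the GF is extracted.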

Figure 12: Sample Images Whose Centroids are not in the Candidate Region.

4.3 Sample Analysis

It is difficult to identify a flame from the change of flame color alone. In the dark night, for example, when a flame or light is the main light source, nearby objects such as the ground or people are tinted with the corresponding color. However, it is relatively easy to identify the flame if the area contains only a flame or only a non-flame object. As shown in Figure 13, the outlined part of the picture is the area identified by the SVM in the first step.

Figure 13: Region Whose Color is Mapped to Flame Color.

Better results can be achieved if the flame area does not overlap with the non-flame area. However, in a real-world environment, as shown in Figure 14, the flame area and the non-flame area, whose color is similar to the flame, are intertwined.

Figure 14: Candidate Regions that Contain Flame or Non-flame.

Strictly speaking, this kind of area is not a flame area, because it contains no flame itself. In practical applications, however, a system should raise a timely alarm when flame appears in such an area, even though the area is likely to be misinterpreted.

Moreover, if the image resolution is low, lights at night are misinterpreted as flame, as shown in Figure 15. We have no better solution to this problem yet; it remains to be improved.

Figure 15: Lights that are Misinterpreted as Flame.

4.4 Experimental Results

Some of the test samples did not contain flames but contained objects that are easily misinterpreted as flame, such as smoke and fog. Only flames were detected in this experiment; smoke and fog were not. The sample images were crawled from the Web and may occasionally be repeated. As a baseline, we used the model with the highest flame-identification accuracy, presented by Khatami et al. [8] in 2017. We also compared against a popular deep learning target detection method, the Single Shot MultiBox Detector (SSD) [11]. The results obtained on our data set are shown in Table 3.

Table 3: Statistical Results.

Method                Precision    Recall
Proposed method       92.36%       96.64%
Khatami et al. [8]    88.98%       96.02%
SSD [11]              90.98%       96.12%

Table 3 presents the detailed data comparison based on a sample of 500 pictures. Two widely used evaluation metrics were applied in our experiments: precision and recall.
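For reference, the two metrics computed on hypothetical detection counts (the TP, FP, FN values below are not the paper's figures):

```python
# Precision and recall as used in Table 3, from true-positive (tp),
# false-positive (fp), and false-negative (fn) counts.
def precision_recall(tp, fp, fn):
    precision = tp / (tp + fp)  # fraction of detections that are real flames
    recall = tp / (tp + fn)     # fraction of real flames that are detected
    return precision, recall


p, r = precision_recall(tp=90, fp=10, fn=5)
print(f"precision={p:.2%} recall={r:.2%}")  # precision=90.00% recall=94.74%
```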

Compared to the method of Khatami et al. [8] and SSD [11], the method presented in this paper performed better on suspected flame areas, such as smoke and other white objects, candlelight reflections, the candle itself, and red and yellow clothes.

As shown in Table 3, we achieved better recall and precision than the other two methods thanks to the use of GF. Figure 16 shows some samples that our method discriminated successfully but that the other two methods recognized as flame.

Figure 16: Some Successfully Discriminated Samples by Our Method.

5 Conclusions

The proposed method consists of two main steps. The first step finds candidate flame regions by applying a pre-trained classifier that checks whether each pixel is a flame pixel, so that every pixel of a picture carries a flame or non-flame label. Based on these labels, a contour-finding method obtains the candidate flame regions. The second step checks whether a region contains flame or merely flame-like objects by extracting the GF.

Features reflecting the pixel variations of a candidate flame region are extracted to differentiate flame regions from flame-like regions. The principle behind this is the natural change of pixels in a fire region; flame-like objects such as smoke, fog, and red clothes show pixel changes that are either too strong or too mild. Experiments were conducted on sample pictures, and the results demonstrated that the proposed method can effectively distinguish flame regions from other flame-like regions.

In the first step of the proposed method, every pixel of a picture must be checked, so the speed is relatively slow. In the future, the per-pixel check of the first step will be improved to accelerate the whole process.

Bibliography

[1] P. V. Borges and E. Izquierdo, A probabilistic approach for vision-based fire detection in videos, IEEE Trans. Circuits Syst. Video Technol. 20 (2010), 721–731. doi:10.1109/TCSVT.2010.2045813.

[2] A. E. Cetin, K. Dimitropoulos, B. Gouverneur, N. Grammalidis, O. Gunay, Y. H. Habiboglu, B. U. Töreyin and S. Verstockt, Video fire detection – review, Digital Signal Process. 23 (2013), 1827–1843. doi:10.1016/j.dsp.2013.07.003.

[3] C. Cortes and V. Vapnik, Support-vector networks, Mach. Learn. 20 (1995), 273–297. doi:10.1007/BF00994018.

[4] J. P. Dukuzumuremyi, B. Zou and D. Hanyurwimfura, A novel algorithm for fire/smoke detection based on computer vision, Int. J. Hybrid Inform. Technol. 7 (2014), 143–154. doi:10.14257/ijhit.2014.7.3.15.

[5] Eigenvalues and eigenvectors, Available at https://en.wikipedia.org/wiki/Eigenvalues_and_eigenvectors, Accessed 17 October, 2017.

[6] P. Gomes, P. Santana and J. Barata, A vision-based approach to fire detection, Int. J. Adv. Robot. Syst. 11 (2014), 565–575. doi:10.5772/58821.

[7] C. C. Ho, Nighttime fire/smoke detection system based on a support vector machine, Math. Probl. Eng. 2013 (2013), 532–548. doi:10.1155/2013/428545.

[8] A. Khatami, S. Mirghasemi, A. Khosravi, C. P. Lim and S. Nahavandi, A new PSO-based approach to fire flame detection using K-Medoids clustering, Expert Syst. Appl. 68 (2017), 69–80. doi:10.1016/j.eswa.2016.09.021.

[9] M. X. Lin, W. L. Chen, B. S. Liu and L. N. Hao, An intelligent fire-detection method based on image processing, Adv. Eng. Forum 2–3 (2011), 172–175. doi:10.4028/www.scientific.net/AEF.2-3.172.

[10] C. Liu and N. Ahuja, Vision based fire detection, in: International Conference on Pattern Recognition, vol. 4, pp. 134–137, Institute of Electrical and Electronics Engineers Inc., Cambridge, UK, 2004.

[11] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu and A. C. Berg, SSD: Single Shot MultiBox Detector, in: European Conference on Computer Vision, pp. 21–37, Springer Verlag, Amsterdam, The Netherlands, 2016. doi:10.1007/978-3-319-46448-0_2.

[12] J. Rong, D. Zhou, W. Yao, J. Chen and J. Wang, Fire flame detection based on GICA and target tracking, Opt. Laser Technol. 47 (2013), 283–291. doi:10.1016/j.optlastec.2012.08.040.

[13] B. U. Toreyin, Y. Dedeoglu, U. Gudukbay and A. E. Cetin, Computer vision based method for real-time fire and flame detection, Pattern Recognit. Lett. 27 (2006), 49–58. doi:10.1016/j.patrec.2005.06.015.

[14] S. Verstockt, A. Vanoosthuyse, S. V. Hoecke, P. Lambert and R. V. de Walle, Multi-sensor fire detection by fusing visual and non-visual flame features, in: International Conference on Image and Signal Processing, pp. 333–341, Springer-Verlag, 2010. doi:10.1007/978-3-642-13681-8_39.

[15] D. Wang, X. Cui, E. Park, C. Jin and H. Kim, Adaptive flame detection using randomness testing and robust features, Fire Saf. J. 55 (2013), 116–125. doi:10.1016/j.firesaf.2012.10.011.

[16] B. H. Yang, Z. Dong, Y. H. Zhang and X. M. Zheng, Recognition of fire detection based on neural network, in: International Conference on Life System Modeling and Simulation and Intelligent Computing, and 2010 International Conference on Intelligent Computing for Sustainable Energy and Environment, pp. 250–258, Springer-Verlag, Wuxi, China, 2010.

[17] Z. Zhang, T. Shen and J. Zou, An improved probabilistic approach for fire detection in videos, Fire Technol. 50 (2014), 745–752. doi:10.1007/s10694-012-0253-1.

[18] Y. Zhao, G. Tang and M. Xu, Hierarchical detection of wildfire flame video from pixel level to semantic level, Expert Syst. Appl. 42 (2015), 4097–4104. doi:10.1016/j.eswa.2015.01.018.

Received: 2017-10-31
Published Online: 2018-07-17

©2020 Walter de Gruyter GmbH, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 Public License.

Articles in the same Issue

  1. An Optimized K-Harmonic Means Algorithm Combined with Modified Particle Swarm Optimization and Cuckoo Search Algorithm
  2. Texture Feature Extraction Using Intuitionistic Fuzzy Local Binary Pattern
  3. Leaf Disease Segmentation From Agricultural Images via Hybridization of Active Contour Model and OFA
  4. Deadline Constrained Task Scheduling Method Using a Combination of Center-Based Genetic Algorithm and Group Search Optimization
  5. Efficient Classification of DDoS Attacks Using an Ensemble Feature Selection Algorithm
Downloaded on 23.4.2025 from https://www.degruyterbrill.com/document/doi/10.1515/jisys-2017-0562/html