Artículo Científico / Scientific Paper

pISSN: 1390-650X / eISSN: 1390-860X









Marco Flores-Calero1,2,*, Cristian Conlago3, Jhonny Yunda3,

Milton Aldás4, Carlos Flores5




This paper presents a system prototype for traffic sign detection (TSDS) on board a moving vehicle. A new approach to the development of a TSDS is presented, using the following innovations: i) an efficient color segmentation method for generating regions of interest (ROIs), based on the k-NN and K-means algorithms; ii) a new version of the HOG descriptor for feature extraction; and iii) the training of a non-linear SVM for the multi-classification stage. The proposed approach has been specialized and tested on a subset of Ecuadorian regulatory signs (Stop, Give-way and Speed). Many experiments have been carried out in real driving conditions in several Ecuadorian cities, under three lighting conditions: normal, sunny and cloudy. The system has shown a global performance of 98.7% for segmentation, 99.49% for classification and a global accuracy of 96% for detection.



Keywords: Accidents, Ecuador, HOG, k-NN, K-means, SVM, Traffic signs, Stop, Give-way, Speed.

1,* Department of Electrical and Electronics Engineering, Universidad de las Fuerzas Armadas-ESPE, Sangolquí, Ecuador. Corresponding author:

2 Department of Intelligent Systems, Tecnologías I&H, Latacunga, Ecuador.

3 Electronic Engineering, Automation and Control Major, Universidad de las Fuerzas Armadas-ESPE.

4 Faculty of Civil and Mechanical Engineering, Universidad Técnica de Ambato, Ambato – Ecuador.

5 Traffic Accident Investigation Service (SIAT), Policía Nacional del Ecuador, Latacunga – Ecuador.  


Received: 02-04-2018, accepted after review: 21-05-2018

Suggested citation: Flores-Calero, M.; Conlago, C.; Yunda, J.; Aldás, M. and Flores, C. (2018). «Implementation of an algorithm for Ecuadorian traffic sign detection: Stop, Give-way and Velocity cases». Ingenius. N.° 20, (July-December). pp. 9-20. doi:




1. Introduction

1.1. Notation

The notation used throughout this article is presented in Table 1.

Table 1. Notation

1.2. Motivation

The purpose of traffic signs is to support the orderly and safe movement of road users, allowing a continuous flow of both vehicle and pedestrian traffic. Each of these signs presents instructions that provide information about routes, destinations, points of interest, prohibitions, alerts, etc. These signs must be respected by all road users in order to avoid unexpected and unfortunate accidents and, above all, to achieve reliable and safe circulation [1]. The risk of an adult pedestrian dying after being hit by a car is less than 20% at a speed of 50 km/h, and about 60% at 80 km/h, so it is essential for drivers to respect the speed limits established by traffic signs [2].

Currently, Ecuador has the best road network in South America [3]. This network includes regulatory Stop, Give-way and Speed traffic signs at road intersections, roundabouts and access points from secondary roads. Despite this important road infrastructure, the death rate in traffic accidents in Ecuador exceeds the average of the other Andean countries by 3.14%. Thus, traffic accidents are a constant problem, due to several critical factors such as driver imprudence, driving at excessive speed and not respecting traffic signs [4]. In 2015, 13.75% of all traffic accidents happened at road intersections [5], generating 8.14% of the deaths from this type of mishap.

Traffic sign detection systems (TSDS) are of increasing importance [6, 7] because they can help prevent and reduce traffic accidents [8]. However, these systems are still far from perfect, and must be specialized by country, adapted to the particularities of each nation's traffic signage design [9].

Therefore, this research presents a TSDS specialized in three types of Ecuadorian traffic signs: Stop, Give-way and Speed. Detecting them is important because it allows the driver to be alerted before crossing an area with a high potential for collision with another vehicle. In the case of the Stop sign, the driver must stop completely; in the case of Give-way, the driver must become vigilant; and in the case of Speed, the driver must respect the speed limits of 50 km/h and 100 km/h in urban and motorway zones, respectively, which are the most common daily limits in each environment.

For the implementation of the TSDS, modern computer vision and artificial intelligence techniques have been used to cover the situations that arise while driving during the day, such as variability of lighting, partial occlusion and deterioration of signs. The document is organized as follows: the second section reviews previous work on traffic sign detection. Section three presents a new system for detecting the Ecuadorian Stop, Give-way and Speed traffic signs. The next section shows the experimental results in real driving conditions. Finally, the last part is dedicated to conclusions and future work.

2. Materials and methods

2.1. Previous works

For the development of systems for automatic detection of traffic signs, the problem is usually divided into two parts, segmentation and recognition/classification [10].

a)  In the case of segmentation, one of the predominant characteristics in the visible spectrum is color: color spaces and different computer vision techniques have been used to generate regions with a high probability of containing a traffic sign. Most color-based techniques seek robustness against lighting variations during the day, in scenarios such as sunny, cloudy, etc. Thus, Salti et al. [11] have used three color spaces derived from RGB: the first to highlight traffic signs where blue and red predominate, the second for signs with intense red and the third for bright blues. Li et al. [12] have constructed a space where objects dominated by the blue-yellow and green-red opponent colors stand out, on which, using the K-means clustering algorithm [13], they build a color classification method for ROI generation. Nguyen et al. [6] have used the HSV space with several thresholds to generate a set of ROIs by looking for red and blue colors. Lillo et al. [14] have used the L*a*b* and HSI spaces to detect signs where red, white and yellow predominate, using the a* and b* components to build a classifier for these colors. Chen and Lu [15] have used multiresolution and AdaBoost techniques to merge two sources of information, visual and spatial localization; for the visual source they construct two color spaces based on RGB, called salient color maps, and for the spatial source they use the gradient with different orientations. Han et al. [16] have used the H component of the HSI space to generate an interval where traffic signs stand out, and to construct a gray image where the ROIs are located. Finally, Villalón et al. [17] have implemented a filter in the normalized RGB color space on which, by computing statistical parameters, they generate the red regions and thus obtain the ROIs.

b)  In the recognition/classification stage, feature-extraction methods have been combined with machine learning algorithms [18–20] to classify and recognize the different types of signs. This stage is divided into two parts: i) the feature extraction method, and ii) the choice of classification algorithm. For the first there is a wide variety of proposals. Salti et al. [11], Huang et al. [21], and Shi and Lin [22] have used the HOG descriptor [23] with three variants specialized in traffic signs. Li et al. [12] have used the PHOG descriptor, a variation of HOG in a pyramidal scheme. Lillo et al. [14] have implemented feature extraction using the discrete Fourier transform. Han et al. [16] have used the SURF method [24]. Chen and Lu [15] used iterative DSC for the generation of the feature vector. Møgelmose et al. [9] jointly implemented ICF and ACF to generate the features. Pérez et al. [10] have used the PCA technique for dimensionality reduction and the choice of dominant features. Finally, Lau et al. [25] have used a weighting of neighboring pixels to highlight the features of the object of interest. For the second part, the preferred algorithms are: SVM [13, 20], used in the works of Salti et al. [11], Li et al. [12], Lillo et al. [14], and Shi and Lin [22]; SVR, used by Chen and Lu [15]; k-NN, implemented in the investigations of Han et al. [16]; and artificial neural networks, used by Huang et al. [21] in the ELM case and by Pérez et al. [10] with the MLP implementation. AdaBoost with decision trees is used in the work of Møgelmose et al. [9]. Villalón et al. [17] have developed a statistical template based on a probability-adjusted model over the normalized YCbCr and RGB spaces. In recent years, techniques based on deep learning have been gaining importance, so much so that CNNs and their variations are used for automatic classification, where the feature vector is extracted without direct human intervention; such is the case of the works of Lau et al. [25], Zhu et al. [27] and Zuo et al. [28].

c)  Regarding traffic sign databases, each country has its own signage regulations, divided into the categories of information, mandatory, prohibitive and warning signs [9,11,14,15,27]. At present, the main databases in the literature correspond to countries such as Germany [10, 21], Italy [11], Spain [14], Japan [6], the United States [9], Sweden [27] and Malaysia [25]; an isolated case is that of Chile [17]. This bibliographic review shows that there is no substantial, let alone reliable, information on traffic sign databases from developing countries such as Ecuador; this poses the challenge of collecting this type of information, which is also relevant to road safety and the maintenance of road infrastructure.

2.2. Methods for the construction of the traffic sign detection system

The scheme of the system proposed in this research is presented in Figure 1, which shows the segmentation (location) and recognition (classification) stages. In the segmentation process, a set of ROIs is generated and then sent to the classification stage for recognition. This proposal works only in the restricted case of the Stop, Give-way and Speed (50 km/h and 100 km/h) traffic signs. These signs have the color red in common, and belong to the prohibition type.

Figure 1. Proposed scheme for the location and recognition of traffic signs at road intersections in Ecuador in the visible spectrum, for the Stop and Give-way cases; and its subsequent extension to the case of Speed at 50 km/h and 100 km/h.


2.2.1. Segmentation by color and ROI generation

Figure 1 (left) shows the segmentation scheme described below.

Segmentation is done by discriminating the red color of the signs from the rest of the colors in the background. Experimentally, the normalized RGB (RGBN) color space has been chosen because the red class has a more compact distribution in the Bn and Gn channels, whose values lie within bounded intervals. Figure 2a shows the distribution of the red color under normal, sunny and dark lighting conditions. Figure 2b shows the distributions of the classes, where red represents the class of interest and blue identifies the non-interest class.
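As a sketch of this step: the normalized channels are obtained by dividing each RGB channel by the brightness sum R+G+B, which is what makes the red class compact across lighting changes. The function below illustrates that normalization and is not the paper's code; the 1/3 fallback for pure-black pixels is an assumption.

```python
def normalize_rgb(r, g, b):
    """Map an RGB pixel to the normalized (chromaticity) RGBN space.

    Dividing by the brightness sum R+G+B removes much of the
    illumination dependence; only two normalized channels are
    independent, so the segmentation works in (Gn, Bn).
    """
    s = r + g + b
    if s == 0:  # pure black carries no chromaticity information (assumed convention)
        return (1 / 3, 1 / 3, 1 / 3)
    return (r / s, g / s, b / s)

# A saturated red pixel keeps the same (Gn, Bn) regardless of brightness.
bright_red = normalize_rgb(200, 40, 40)
dark_red = normalize_rgb(100, 20, 20)
```

Note how the bright and dark red pixels land on the same chromaticity point, which is exactly the property the segmentation relies on.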

1)       Representative points in the Bn–Gn space: To generate a small number of representative points for each class, the K-means clustering algorithm is used [19]; in this way, Km centroids are obtained for each class. The appropriate value of Km has been determined experimentally using the Calinski-Harabasz [29], Davies-Bouldin [30], Gap [31] and Silhouettes [32] methods, obtaining values of 30 and 40 for the red and non-red (other colors) classes, respectively. Figure 2c shows the centroids of the two classes generated with K-means. To generate this figure, samples have been used under three lighting conditions: sunny, normal and dark.
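The Km centroids per class can be obtained with any standard K-means implementation. The sketch below is a minimal Lloyd's-algorithm illustration with deterministic farthest-point seeding; the seeding strategy and the toy (Gn, Bn) samples are assumptions, not the paper's setup.

```python
def dist2(p, q):
    """Squared Euclidean distance between two (Gn, Bn) points."""
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

def kmeans(points, km, iters=50):
    """Lloyd's algorithm: return `km` centroids of 2-D points."""
    # Deterministic farthest-point seeding (an assumption for this sketch).
    centroids = [points[0]]
    while len(centroids) < km:
        centroids.append(max(points, key=lambda p: min(dist2(p, c) for c in centroids)))
    for _ in range(iters):
        clusters = [[] for _ in range(km)]
        for p in points:  # assign each point to its nearest centroid
            clusters[min(range(km), key=lambda i: dist2(p, centroids[i]))].append(p)
        new = [(sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl)) if cl else c
               for c, cl in zip(centroids, clusters)]
        if new == centroids:  # converged
            break
        centroids = new
    return centroids

# Two obvious groups in (Gn, Bn) space; Km = 2 recovers one centroid per group.
samples = [(0.10, 0.12), (0.12, 0.10), (0.11, 0.11),
           (0.33, 0.33), (0.35, 0.31), (0.34, 0.32)]
cents = kmeans(samples, 2)
```

In the paper's pipeline this would be run once per class (red and non-red) with Km = 30 and Km = 40 respectively.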




Figure 2. Color distribution in the normalized RGB space (Bn, Gn): (a) distribution according to lighting conditions, (b) representation of the interest and non-interest classes, (c) centroids generated with K-means.

2)       Classifier design based on k-NN: To design this classifier it is important to choose an adequate value of k that improves the discrimination between the class of interest and the background. For this purpose, the area under the ROC curve, known as the AUC index [33], has been used. The values of k tested in this procedure range from 1 to 8. Table 2 shows the results used to choose the best value of k.
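Once labeled centroids are available for both classes, classifying a pixel reduces to a majority vote among its k nearest centroids. A minimal sketch, in which the centroid coordinates and labels are illustrative placeholders rather than the paper's trained values:

```python
def knn_classify(point, centroids, labels, k=4):
    """Label a (Gn, Bn) pixel by majority vote of its k nearest centroids."""
    order = sorted(range(len(centroids)),
                   key=lambda i: (point[0] - centroids[i][0]) ** 2
                               + (point[1] - centroids[i][1]) ** 2)
    votes = [labels[i] for i in order[:k]]
    return max(set(votes), key=votes.count)

# Toy centroid sets standing in for the Km red / non-red centroids.
centroids = [(0.11, 0.11), (0.12, 0.10), (0.33, 0.33), (0.34, 0.32), (0.20, 0.30)]
labels = ["red", "red", "bg", "bg", "bg"]
```

Scanning the whole image with this vote produces the binary mask that the post-processing step then cleans up.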

Table 2. Choice of the k parameter in k-NN

3)    Post-processing of bodies: Afterwards, using the morphological operators of dilation and erosion [26], bodies that do not meet specific size characteristics are eliminated as candidates for traffic signs. The thresholds for this procedure have been set experimentally.
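The two operators can be illustrated as follows: applying erosion and then dilation (an opening) removes isolated noise pixels while preserving larger candidate bodies. This is a generic 3 × 3 binary-morphology sketch, not the paper's implementation.

```python
def erode(mask):
    """3x3 erosion: a pixel survives only if its full neighborhood is set."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = int(all(mask[y + dy][x + dx]
                                for dy in (-1, 0, 1) for dx in (-1, 0, 1)))
    return out

def dilate(mask):
    """3x3 dilation: a pixel is set if any neighbor is set."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = int(any(0 <= y + dy < h and 0 <= x + dx < w
                                and mask[y + dy][x + dx]
                                for dy in (-1, 0, 1) for dx in (-1, 0, 1)))
    return out

# Opening removes the isolated 1-pixel noise while keeping the solid 3x3 body.
noisy = [[0, 0, 0, 0, 0, 0],
         [0, 1, 0, 0, 0, 0],
         [0, 0, 0, 1, 1, 1],
         [0, 0, 0, 1, 1, 1],
         [0, 0, 0, 1, 1, 1],
         [0, 0, 0, 0, 0, 0]]
opened = dilate(erode(noisy))
```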

4)    Geometric constraints: Finally, the bodies that do not fulfill the height/width relation are eliminated, using experimentally determined thresholds; Table 3 shows the necessary parameters as a function of the reference distance. This distance is part of the collision risk zone of a vehicle.
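A minimal sketch of such a geometric filter; the numeric thresholds below are illustrative placeholders, not the experimentally determined values of Table 3.

```python
def passes_geometry(w, h, min_side=16, max_side=120, ratio_lo=0.8, ratio_hi=1.25):
    """Keep a candidate ROI only if its size and height/width ratio are
    plausible for a sign at the reference distance.

    All thresholds here are illustrative placeholders, not the paper's
    calibrated values.
    """
    if not (min_side <= w <= max_side and min_side <= h <= max_side):
        return False
    return ratio_lo <= h / w <= ratio_hi

# (width, height) of candidate bodies from the segmentation mask.
rois = [(30, 32), (30, 90), (4, 4), (100, 100)]
kept = [r for r in rois if passes_geometry(*r)]
```

Only the roughly square, reasonably sized candidates survive; elongated or tiny blobs are discarded before recognition.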

Table 3. Geometric characteristics that a ROI must fulfill over an image of 640×480 size depending on the reference distance



2.2.2. Recognition of traffic signs

In this stage, the ROIs coming from the segmentation stage are classified to determine if they correspond to a Stop, Give-way or Speed sign, or to another object that is not of interest.

Figure 1 (right) shows the recognition scheme, which consists of the following parts:

1)  Preprocessing of candidates: The ROI images are converted to gray scale, normalized to a size of 32×32 pixels, and then histogram equalization is applied to obtain an image with a uniform distribution of gray levels. This process increases the contrast of the image and reduces the effect of abrupt illumination changes.
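The histogram equalization in this step can be sketched as follows; this is the generic textbook algorithm, not the paper's code.

```python
def equalize(img, levels=256):
    """Histogram equalization: remap gray levels through the normalized
    cumulative histogram so the output uses the full dynamic range."""
    flat = [p for row in img for p in row]
    n = len(flat)
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    cdf, run = [], 0
    for h in hist:  # cumulative histogram
        run += h
        cdf.append(run)
    cdf_min = next(c for c in cdf if c > 0)
    # Standard remapping: lowest occupied level -> 0, highest -> levels-1.
    lut = [round((c - cdf_min) / max(n - cdf_min, 1) * (levels - 1)) for c in cdf]
    return [[lut[p] for p in row] for row in img]

# A low-contrast patch (values 100..103) is stretched toward the 0..255 range.
patch = [[100, 100, 101, 101],
         [101, 102, 102, 102],
         [102, 102, 103, 103],
         [103, 103, 103, 103]]
eq = equalize(patch)
```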

2)  Feature extraction: A new version of the HOG descriptor [34] is used to find the representative characteristics of a traffic sign. The innovation developed on this descriptor focuses on varying the size of the cells and the number of orientations, and finding the best combination adapted to traffic signs. The cells take values of 2 × 2, 4 × 4, 8 × 8 and 16 × 16 pixels; Figure 3 shows this division in the four cases. The orientation histogram is obtained by dividing the unsigned orientation range [−90°, 90°) into 3, 6, 9, 12 and 15 intervals.
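The core of any HOG variant is a per-cell, magnitude-weighted histogram of unsigned gradient orientations. The sketch below illustrates that computation for configurable cell sizes and bin counts; the block normalization and overlap that the paper's descriptor also uses are omitted for brevity, so this is a simplified illustration, not the proposed descriptor.

```python
import math

def hog_cells(img, cell=4, bins=9):
    """Minimal HOG-style feature: per-cell histograms of unsigned gradient
    orientations, weighted by gradient magnitude (no block normalization)."""
    h, w = len(img), len(img[0])
    gx = [[0.0] * w for _ in range(h)]
    gy = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):  # centered differences, clamped at the borders
            gx[y][x] = img[y][min(x + 1, w - 1)] - img[y][max(x - 1, 0)]
            gy[y][x] = img[min(y + 1, h - 1)][x] - img[max(y - 1, 0)][x]
    ncy, ncx = h // cell, w // cell
    hists = [[[0.0] * bins for _ in range(ncx)] for _ in range(ncy)]
    for y in range(ncy * cell):
        for x in range(ncx * cell):
            mag = math.hypot(gx[y][x], gy[y][x])
            ang = math.atan2(gy[y][x], gx[y][x]) % math.pi  # unsigned: fold into [0, pi)
            b = min(int(ang / math.pi * bins), bins - 1)
            hists[y // cell][x // cell][b] += mag
    # Flatten to the feature vector fed to the classifier.
    return [v for row in hists for cell_hist in row for v in cell_hist]

# An 8x8 image with a vertical edge: one 9-bin histogram per 4x4 cell.
img = [[0] * 4 + [10] * 4 for _ in range(8)]
feat = hog_cells(img, cell=4, bins=9)
```

With 4 × 4 cells on an 8 × 8 image the vector has 2 × 2 × 9 = 36 entries, and the vertical edge puts all its energy into the 0° bin of each cell.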



Figure 3. Cell size variation on images of 32 × 32 pixels: (a) 2 × 2, (b) 4 × 4, (c) 8 × 8, (d) 16 × 16.

3)  Classification training based on SVM: SVM [18–20] is used with three different kernels to find the best option: linear, polynomial and RBF. For training, data sets corresponding to the Stop, Give-way and Speed signs are used, together with other elements that do not belong to the previous cases (negatives).

The best option over this range of parameters is chosen using the AUC index [33]. In total, 60 cases combining points 2 and 3 are evaluated, and those that generate the best results are reported in the next section.
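The three kernels under comparison can be written down directly. The parameter names (gamma, r, d) follow the usual libsvm convention, which is an assumption about the paper's setup rather than something the text states.

```python
import math

def linear(u, v):
    """Linear kernel: plain dot product <u, v>."""
    return sum(a * b for a, b in zip(u, v))

def polynomial(u, v, gamma, r=0.0, d=3):
    """Polynomial kernel (gamma * <u, v> + r)^d — the family the paper
    tunes via C, r and gamma (degree d=3 is an assumed default)."""
    return (gamma * linear(u, v) + r) ** d

def rbf(u, v, gamma):
    """RBF (Gaussian) kernel exp(-gamma * ||u - v||^2)."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(u, v)))
```

With a feature vector of size m, the reported choice gamma = 1/m makes the kernel value scale-independent of the descriptor length.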

3. Results and discussion

3.1. Perception and processing system

The complete traffic sign detection system is presented in Figure 4. The perception system consists of a USB webcam running at 25 frames per second, a display screen and a camera support. The processing system is a computer installed in the experimental vehicle ViiA. This vehicle incorporates a 12 V to 120 V AC power source that continuously supplies electrical power for the operation of the on-board system.

Figure 4. System of traffic sign detection in Ecuador, for the Stop, Give-way and Speed (50 and 100) signs, installed on the windshield of an experimental vehicle.

Currently, this system is easy to install in any type of vehicle and does not interfere with driving thanks to its small size.

3.2. Training, validation and experimentation database

The training and validation databases have been built with images of traffic signs from Ecuador, taken in the cities of Latacunga, Ambato, Salcedo, Quito and Sangolquí, in real driving scenarios, in different lighting conditions during the day. These conditions correspond to the cases of normal, sunny and cloudy. More details are found in Table 4.

Table 4. Environmental conditions for the acquisition of images



Table 5 shows the size of the training and validation sets obtained by means of the Holdout method [35] and in Figure 5 several positive and negative examples are observed.

Table 5. Size of the training and validation sets for the Stop, Give-way and negative classes











Figure 5. Examples of the traffic sign database for Ecuador under different lighting and status conditions, (a) Stop, (b) Give-way, (c) limit of 50 km/h, (d) limit of 100 km/h and (e) negative examples.

To increase the size of the training set, randomly rotated copies of the images were added until the set reached five times its original size. In this way, the variability of the database is increased.
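The augmentation step can be sketched as below: small random rotations are applied to randomly chosen images until the set is five times its original size. The ±10° range and the nearest-neighbor resampling are assumptions for illustration, not values from the paper.

```python
import math
import random

def rotate(img, degrees):
    """Nearest-neighbor rotation about the image center (background = 0)."""
    h, w = len(img), len(img[0])
    cy, cx = (h - 1) / 2, (w - 1) / 2
    a = math.radians(degrees)
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Inverse-map each output pixel back into the source image.
            sx = math.cos(a) * (x - cx) + math.sin(a) * (y - cy) + cx
            sy = -math.sin(a) * (x - cx) + math.cos(a) * (y - cy) + cy
            si, sj = round(sy), round(sx)
            if 0 <= si < h and 0 <= sj < w:
                out[y][x] = img[si][sj]
    return out

def augment(images, factor=5, max_deg=10, seed=0):
    """Grow the training set to `factor` times its size with random rotations."""
    rng = random.Random(seed)
    out = list(images)
    while len(out) < factor * len(images):
        out.append(rotate(rng.choice(images), rng.uniform(-max_deg, max_deg)))
    return out

base = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]  # two tiny placeholder "images"
aug = augment(base, factor=5)
```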

Subsequently, to verify the operation of the system, a database of videos was built in real driving situations, in the visible spectrum and under different lighting conditions. This base consists of five videos under different lighting conditions, in which the signs have been manually annotated for evaluation purposes [33].

3.3. Analysis of results

The results can be summarized in the following points:

1)  For color segmentation, the classification algorithm achieves an AUC of 0.986, with k = 4 and Km = 30 for the red class and Km = 40 for the other-colors class.

2)  For the classification, the best parameters of the HOG descriptor are cells of 8 × 8 pixels, blocks of 2 × 2 cells with simple overlap and 9 unsigned orientations, with polynomial SVM parameters C = 2^15, r = 0 and γ = 1/m, where m is the size of the feature vector. Table 6 presents the results for the case of 8 × 8 pixels, with the best result highlighted in bold.

Table 6. Classification results with HOG characteristics with cells of 8 × 8 pixels in all orientations.

To measure the detection power, Figure 6 presents the curve of the false negative rate (miss rate) versus the false positive rate, on a logarithmic scale in the range 0.01–1 [36]. It shows that the best performance is on normal days, with a miss rate of 13%, and the worst is on sunny days, with a miss rate of 28%.
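Each point of a DET curve pairs the miss rate on true signs with the false-positive rate on background windows at one detector threshold; sweeping the threshold traces the whole curve. A minimal sketch with made-up scores:

```python
def det_point(scores_pos, scores_neg, thr):
    """Miss rate (false-negative rate) and false-positive rate at one
    detector threshold; sweeping thr traces the DET curve."""
    fn = sum(s < thr for s in scores_pos)   # true signs scored below threshold
    fp = sum(s >= thr for s in scores_neg)  # background scored above threshold
    return fn / len(scores_pos), fp / len(scores_neg)

pos = [0.9, 0.8, 0.6, 0.4]  # detector scores on true signs (illustrative)
neg = [0.7, 0.3, 0.2, 0.1]  # scores on background windows (illustrative)
miss, fpr = det_point(pos, neg, thr=0.5)
```

At threshold 0.5 this toy detector misses one of four signs and accepts one of four background windows.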

The system has an excellent performance, with an average accuracy of 96%. The worst accuracy occurs in sunny conditions, since the excess of light prevents a correct segmentation for the generation of ROIs (see Table 7).

Figure 6. DET curve of the traffic sign detection system, separated in different lighting conditions and globally.



Table 7. Results of the traffic sign detection system in different lighting scenarios during the day

a True positive rate, b False negative rate

c True negative rate, d False positive rate

e Accuracy, f Precision
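The quantities in these footnotes follow directly from the confusion counts. A small helper, using illustrative counts rather than the paper's data:

```python
def detection_metrics(tp, fn, tn, fp):
    """The rates reported in Table 7, computed from confusion counts."""
    return {
        "TPR": tp / (tp + fn),        # true positive rate (recall)
        "FNR": fn / (tp + fn),        # false negative (miss) rate
        "TNR": tn / (tn + fp),        # true negative rate
        "FPR": fp / (tn + fp),        # false positive rate
        "accuracy": (tp + tn) / (tp + fn + tn + fp),
        "precision": tp / (tp + fp),
    }

# Placeholder counts, not the paper's experimental numbers.
m = detection_metrics(tp=90, fn=10, tn=85, fp=15)
```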

Several examples generated by the system can be seen in Figures 7, 8, 9 and 10. The samples cover various lighting conditions during the day, at dawn and in the early evening, while traveling through urban and highway areas around the cities of Quito and Sangolquí.




Figure 7. Results of the traffic sign detection system in the case of Stop signs, during a sunny day on a highway; (a) input image, (b) ROI and (c) detections.




Figure 8. Results of the traffic sign detection system in the cases of Stop and Give-way signs, during a dark day in an urban area; (a) input image, (b) ROI and (c) detections.






Figure 9. Results of the traffic sign detection system in the case of Speed of 50 sign, during a dark day (at dawn) in urban area; (a) input image, (b) ROI and (c) detections.




Figure 10. Results of the traffic sign detection system in the case of Speed of 100, during a dark day in urban area; (a) input image, (b) ROI and (c) detections.

3.4. Computation times

Table 8 shows the computation time of the global system.

Table 8. Total computation times of the traffic sign detection system in Ecuador in the visible spectrum in the cases of Stop, Give-way signs.

These results are average values over the processing of 640×480-pixel images, distributed as follows: 9999 in sunny, 14744 in normal and 12442 in cloudy conditions.

From these experimental results it can be verified that the computation times, in the cases of segmentation and recognition, are quite short and therefore competitive to be part of applications in real-time systems.

4. Conclusions and future work

In this research work, in the field of driving assistance systems with emphasis on the detection of traffic signs, the following original contributions were made:

•   The construction of a new database for the recognition of traffic signs in Ecuador, in the cases of Stop, Give-way and Speed signs. This information is available for the free use of the scientific community.

•   The development of a new color segmentation method for ROI generation, using the k-NN classifier together with the K-means clustering algorithm. This implementation efficiently covers the normal, sunny and dark lighting scenarios during the day. In addition, distance is included as a reference parameter for ROI preselection. With this proposal, a classification rate of 98.7% is reached on the pixels of interest and the background.

•   The implementation of a new version of the HOG descriptor, consisting of cells of 8 × 8 pixels, blocks of 2 × 2 cells with simple overlap and 9 unsigned orientations. The classification rate is 99.49% using SVM with a polynomial kernel.

•    The construction of a system to detect traffic signs in Ecuador, specialized in the Stop and Give-way cases. The DET curve indicates that its performance is 96%, which is competitive with the proposals present in the state of the art.

•   The construction of a driver assistance system that works in quasi-real time, at 21.58 frames per second, and that is easy to install in a vehicle for everyday use.

In the future, this methodology will be extended to all prohibition-type traffic signs in Ecuador, which include the remaining speed limit signs for urban areas and highways. Finally, it is worth indicating that a method to check and compare the quality of the classifier will be introduced; for this purpose, a method based on ELM is being prepared.


Acknowledgments

The vehicle used for the development of a significant part of this project was provided by the Tecnologías I&H company, to which gratitude is due. In addition, we thank the anonymous reviewers for their valuable input, which has contributed significantly to the improvement of this manuscript.


References

[1]   INEN, RTE INEN 004-1:2011. Señalización vial. Parte 1. Señalización vertical, Instituto Ecuatoriano de Normalización Std., 2011. [Online]. Available:

[2]   OMS. (2018) Lesiones causadas por el tránsito. Organización Mundial de la Salud. [Online]. Available:

[3]  K. Schwab, “The global competitiveness report   2015–2016,” World Economic Forum, Tech. Rep., 2015. [Online]. Available:

[4]   ANT. (2017) Siniestros septiembre 2017. Agencia Nacional de Tránsito, Ecuador. [Online]. Available:

[5]    ——. (2015) Siniestros octubre 2015. Agencia Nacional de Tránsito, Ecuador. [Online]. Available:

[6]   B. T. Nguyen, S. J. Ryong, and K. J. Kyu, “Fast traffic sign detection under challenging conditions,” in 2014 International Conference on Audio, Language and Image Processing, July 2014, pp. 749–752. doi: 10.1109/ICALIP.2014.7009895.

[7]   H. Gomez-Moreno, S. Maldonado-Bascon, P. Gil-Jimenez, and S. Lafuente-Arroyo, “Goal evaluation of segmentation algorithms for traffic sign recognition,” IEEE Transactions on Intelligent Transportation Systems, vol. 11, no. 4, pp. 917–930, 2010. doi: 10.1109/TITS.2010.2054084.

[8]   A. Shaout, D. Colella, and S. Awad, “Advanced driver assistance systems - past, present and future,” in Computer Engineering Conference (ICENCO), 2011 Seventh International, Dec 2011, pp. 72–82. doi: 10.1109/ICENCO.2011.6153935.

[9]  A. Møgelmose, D. Liu, and M. M. Trivedi, “Detection of U.S. traffic signs,” IEEE Transactions on Intelligent Transportation Systems, vol. 16, no. 6, pp. 3116–3125, Dec 2015. doi: 10.1109/TITS.2015.2433019.

[10] S. E. Perez-Perez, S. E. Gonzalez-Reyna, S. E. Ledesma-Orozco, and J. G. Avina-Cervantes, “Principal component analysis for speed limit traffic sign recognition,” in 2013 IEEE International Autumn Meeting on Power, Electronics and Computing (ROPEC), Nov. 2013, pp. 1–5. doi: 10.1109/ROPEC.2013.6702716.

[11] S. Salti, A. Petrelli, F. Tombari, N. Fioraio, and L. D. Stefano, “Traffic sign detection via interest region extraction,” Pattern Recognition, vol. 48, no. 4, pp. 1039–1049, 2015. doi: 10.1016/j.patcog.2014.05.017.

[12] H. Li, F. Sun, L. Liu, and L. Wang, “A novel traffic sign detection method via color segmentation and robust shape matching,” Neurocomputing, vol. 169, pp. 77–88, 2015. doi: 10.1016/j.neucom.2014.12.111.

[13] T. Hastie, R. Tibshirani, and J. Friedman, The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer Science & Business Media, 2009. [Online]. Available:

[14] J. Lillo-Castellano, I. Mora-Jiménez, C. Figuera-Pozuelo, and J. Rojo-Álvarez, “Traffic sign segmentation and classification using statistical learning methods,” Neurocomputing, vol. 153, pp. 286–299, 2015. doi: 10.1016/j.neucom.2014.11.026.

[15] T. Chen and S. Lu, “Accurate and efficient traffic sign detection using discriminative adaboost and support vector regression,” IEEE Transactions on Vehicular Technology, vol. 65, no. 6, pp. 4006–4015, June 2016. doi:

[16] Y. Han, K. Virupakshappa, and E. Oruklu, “Robust traffic sign recognition with feature extraction and k-NN classification methods,” in 2015 IEEE International Conference on Electro/Information Technology (EIT), May 2015, pp. 484–488. doi: 10.1109/EIT.2015.7293386.

[17] G. Villalón-Sepúlveda, M. Torres-Torriti, and M. Flores-Calero, “Sistema de detección de señales de tráfico para la localización de intersecciones viales y frenado anticipado,” Revista Iberoamericana de Automática e Informática Industrial RIAI, vol. 14, no. 2, pp. 152–162, 2017. doi: 10.1016/j.riai.2016.09.010.

[18] C. Cortes and V. Vapnik, “Support-vector networks,” Machine Learning, vol. 20, no. 3, pp. 273–297, Sep 1995. doi: 10.1007/BF00994018.

[19]   R. O. Duda, P. E. Hart, and D. G. Stork, Pattern Classification, 2nd ed. John Wiley & Sons, 2012. [Online]. Available:



[20] N. Cristianini and J. Shawe-Taylor, An Introduction to Support Vector Machines and Other Kernel-based Learning Methods. Cambridge University Press, 2000. doi:

[21] Z. Huang, Y. Yu, and J. Gu, “A novel method for traffic sign recognition based on extreme learning machine,” in Proceeding of the 11th World Congress on Intelligent Control and Automation, June 2014, pp. 1451–1456. doi: 10.1109/WCICA.2014.7052932.

[22] J. H. Shi and H. Y. Lin, “A vision system for traffic sign detection and recognition,” in 2017 IEEE 26th International Symposium on Industrial Electronics (ISIE), June 2017, pp. 1596–1601. doi: 10.1109/ISIE.2017.8001485.

[23] N. Dalal, “Finding People in Images and Videos,” Ph.D. thesis, Institut National Polytechnique de Grenoble - INPG, 2006. [Online]. Available:

[24] H. Bay, A. Ess, T. Tuytelaars, and L. V. Gool, “Speeded-up robust features (SURF),” Computer Vision and Image Understanding, vol. 110, no. 3, pp. 346–359, 2008. doi: 10.1016/j.cviu.2007.09.014.

[25] M. M. Lau, K. H. Lim, and A. A. Gopalai, “Malaysia traffic sign recognition with convolutional neural network,” in 2015 IEEE International Conference on Digital Signal Processing (DSP), July 2015, pp. 1006–1010. doi: 10.1109/ICDSP.2015.7252029.

[26] G. Pajares Martinsanz and J. M. de la Cruz García, Visión por computador: imágenes digitales y aplicaciones. RA-MA Editorial, 2008. [Online]. Available:

[27] Y. Zhu, C. Zhang, D. Zhou, X. Wang, X. Bai, and W. Liu, “Traffic sign detection and recognition using fully convolutional network guided proposals,” Neurocomputing, vol. 214, pp. 758–766, 2016. doi: 10.1016/j.neucom.2016.07.009.

[28] Z. Zuo, K. Yu, Q. Zhou, X. Wang, and T. Li, “Traffic signs detection based on faster R-CNN,” in 2017 IEEE 37th International Conference on Distributed Computing Systems Workshops (ICDCSW), June 2017, pp. 286–288. doi:

[29] T. Calinski and J. Harabasz, “A dendrite method for cluster analysis,” Communications in Statistics, vol. 3, no. 1, pp. 1–27, 1974. doi:

[30] D. L. Davies and D. W. Bouldin, “A cluster separation measure,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PAMI-1, no. 2, pp. 224–227, April 1979. doi: 10.1109/TPAMI.1979.4766909.

[31] R. Tibshirani, G. Walther, and T. Hastie, “Estimating the number of clusters in a data set via the gap statistic,” Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol. 63, no. 2, pp. 411–423, 2001. doi: 10.1111/1467-9868.00293.

[32]   P. J. Rousseeuw, “Silhouettes: A graphical aid to the interpretation and validation of cluster analysis,” Journal of Computational and Applied Mathematics, vol. 20, pp. 53–65, 1987. doi:

[33] T. Fawcett, “ROC graphs: Notes and practical considerations for researchers,” Tech. Rep., 2004. [Online]. Available:

[34] N. Dalal and B. Triggs, “Histograms of oriented gradients for human detection,” in 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), vol. 1, June 2005, pp. 886–893. doi:

[35] R. Kohavi, “A study of cross-validation and bootstrap for accuracy estimation and model selection,” in Proceedings of the 14th International Joint Conference on Artificial Intelligence - Volume 2, ser. IJCAI’95. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc., 1995, pp. 1137–1143. [Online]. Available:

[36] A. Martin, G. Doddington, T. Kamm, M. Ordowski, and M. Przybocki, “The DET curve in assessment of detection task performance,” 1997, pp. 1895–1898. [Online]. Available: