Semantic Video Segmentation Using an Ensemble of Particular Classifiers and a Deep Neural Network for Abnormal-Situation Detection Systems

O. Amosov, Y. Ivanov, S. Zhiganov


A new approach based on a deep neural network and an ensemble of particular classifiers is proposed. The approach relies on a novel fuzzy-generalization block that combines object classes into semantic groups, each of which corresponds to one or more particular classifiers. As a result of processing, a sequence of frames is converted into an annotation of the event occurring in the video over a given time interval.
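The pipeline described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: it assumes a per-frame detector has already produced (class, score) pairs, uses made-up fuzzy membership values to map classes into semantic groups (the "fuzzy generalization" step), and aggregates per-frame group activations into a textual annotation for the interval. All class names, groups, thresholds, and functions here are hypothetical.

```python
# Hypothetical sketch of the abstract's pipeline: detected classes are
# generalized into semantic groups via fuzzy membership degrees, each
# group corresponding to a particular classifier; per-frame results are
# then aggregated into an event annotation for the time interval.
from collections import Counter

# Illustrative fuzzy membership of detector classes in semantic groups.
MEMBERSHIP = {
    "car":        {"vehicle": 1.0},
    "truck":      {"vehicle": 0.9},
    "pedestrian": {"person": 1.0},
    "cyclist":    {"person": 0.6, "vehicle": 0.4},
}

def generalize(detections, threshold=0.5):
    """Map one frame's (class, score) detections to semantic groups."""
    groups = set()
    for cls, score in detections:
        for group, mu in MEMBERSHIP.get(cls, {}).items():
            # A group (and its particular classifier) fires when the
            # detector score weighted by membership clears the threshold.
            if score * mu >= threshold:
                groups.add(group)
    return groups

def annotate(frames):
    """Aggregate per-frame semantic groups into an interval annotation."""
    counts = Counter(g for dets in frames for g in generalize(dets))
    if not counts:
        return "no activity"
    return "interval contains: " + ", ".join(sorted(counts))

# Toy frame sequence; each frame is a list of (class, detector score).
frames = [
    [("car", 0.9), ("pedestrian", 0.8)],
    [("car", 0.95)],
    [("cyclist", 0.9)],
]
print(annotate(frames))  # prints "interval contains: person, vehicle"
```

In a full system the membership table and thresholds would come from the fuzzy-inference design, and each semantic group would dispatch crops to its own specialized classifier rather than a simple threshold test.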


semantic segmentation; automatic image annotation; deep neural network; abnormal situations




This work is licensed under a Creative Commons Attribution 3.0 License.


IT in Industry © 2012–present. ISSN (Online): 2203-1731; ISSN (Print): 2204-0595