Translator of Indonesian Sign Language Video using Convolutional Neural Network with Transfer Learning

Authors

  • Sesilia Shania
  • Mohammad Farid Naufal, Universitas Surabaya
  • Vincentius Riandaru Prasetyo
  • Mohd Sanusi Bin Azmi

DOI:

https://doi.org/10.24002/ijis.v5i1.5865

Abstract

Sign language is a language used to communicate through hand gestures and facial expressions. This study focuses on the classification of Indonesian Sign Language (Bahasa Isyarat Indonesia, BISINDO). Many people still have difficulty communicating with deaf people. This study builds a video-based translator system using a Convolutional Neural Network (CNN) with transfer learning, an approach commonly used in computer vision, especially in image classification. The transfer learning architectures used in this study are MobileNetV2, ResNet50V2, and Xception. The study uses 11 commonly used BISINDO vocabulary items, and predictions are made in a real-time scenario using a webcam. In addition, the system gave good results in an experiment involving interaction between a deaf person and a hearing person. Across all experiments, the Xception architecture achieved the best F1 score, at 98.5%.
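As a rough illustration of the approach described in the abstract, the sketch below shows how a pretrained Xception backbone could be fine-tuned for an 11-class frame classifier in Keras. This is a minimal, hypothetical setup: the paper does not publish its training code, and the input size, preprocessing, head layers, and optimizer settings shown here are assumptions, not the authors' exact configuration.

# Minimal sketch: transfer learning with a pretrained Xception backbone
# for an 11-class BISINDO frame classifier (hypothetical configuration).
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 11             # 11 BISINDO vocabulary items (from the abstract)
INPUT_SHAPE = (224, 224, 3)  # assumed input size

# ImageNet-pretrained convolutional base, classification head removed.
backbone = tf.keras.applications.Xception(
    include_top=False,
    weights="imagenet",
    input_shape=INPUT_SHAPE,
    pooling="avg",
)
backbone.trainable = False   # freeze the base; train only the new head

model = models.Sequential([
    layers.Input(shape=INPUT_SHAPE),
    layers.Rescaling(1.0 / 127.5, offset=-1.0),  # Xception expects inputs in [-1, 1]
    backbone,
    layers.Dropout(0.2),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-3),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

For real-time prediction, frames captured from a webcam would be resized to the model's input shape and passed to model.predict one at a time; MobileNetV2 or ResNet50V2 could be swapped in for the backbone in the same way.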

Published

2022-08-27

How to Cite

Shania, S., Farid Naufal, M., Riandaru Prasetyo, V., & Bin Azmi, M. S. (2022). Translator of Indonesian Sign Language Video using Convolutional Neural Network with Transfer Learning. Indonesian Journal of Information Systems, 5(1), 17–27. https://doi.org/10.24002/ijis.v5i1.5865

Issue

Vol. 5 No. 1 (2022)

Section

Articles