Real Time Static and Dynamic Sign Language Recognition using Deep Learning
Title Statement: Real Time Static and Dynamic Sign Language Recognition using Deep Learning

Added Entry - Uncontrolled Name:
Jayanthi, P; Department of Computer Technology, MIT, Anna University, Chennai 600 044, Tamil Nadu, India
Bhama, Ponsy RK Sathia; Department of Computer Technology, MIT, Anna University, Chennai 600 044, Tamil Nadu, India
Swetha, K; Department of Information Technology, MIT, Anna University, Chennai 600 044, Tamil Nadu, India
Subash, S A; Department of Information Technology, MIT, Anna University, Chennai 600 044, Tamil Nadu, India

Uncontrolled Index Term: Deaf-mute people, Human-machine interaction, Inception deep-convolution network, Key frame extraction, Video analytics

Summary, etc.:
Sign language recognition systems enable communication between deaf-mute people and normal users. Spatial localization of the hands can be a challenging task when the hands occupy only about 10% of the entire image. This is overcome by designing a real-time, efficient system that performs extraction, recognition, and classification within a single network using a deep convolutional network. Recognition is performed on a static image dataset with simple and complex backgrounds and on a dynamic video dataset. The static image dataset is trained and tested using a 2D deep convolutional neural network, whereas the dynamic video dataset is trained and tested using a 3D deep convolutional neural network. Spatial augmentation is applied to increase the number of images in the static dataset, and key-frame extraction is used to extract the key frames from the videos in the dynamic dataset. To improve system performance and accuracy, a Batch Normalization layer is added to the convolutional network. The accuracy is nearly 99% for the dataset with a simple background, 92% for the dataset with a complex background, and 84% for the video dataset. These results show that the system is efficient at recognizing and interpreting sign language gestures in real time.

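The summary states that key-frame extraction is used to select representative frames from the videos in the dynamic dataset, but this record does not specify the method. The block below is only a minimal illustrative sketch using a frame-difference criterion with OpenCV; the function name extract_key_frames and the diff_threshold value are assumptions for illustration, not the authors' implementation.

```python
# Illustrative key-frame extraction sketch (not the paper's exact method).
# A frame is kept only if it differs enough from the last kept frame,
# so near-duplicate frames are dropped before training the 3D network.
import cv2
import numpy as np

def extract_key_frames(video_path, diff_threshold=30.0):
    """Return a list of key frames (BGR arrays) from a video file."""
    cap = cv2.VideoCapture(video_path)
    key_frames, last_gray = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Mean absolute pixel difference against the previous key frame.
        if last_gray is None or np.mean(cv2.absdiff(gray, last_gray)) > diff_threshold:
            key_frames.append(frame)
            last_gray = gray
    cap.release()
    return key_frames
```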
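The summary also describes a 2D deep convolutional neural network with an added Batch Normalization layer for the static image dataset; the actual architecture (the index terms point to an Inception-style network) is not reproduced in this record. The following is only a minimal Keras sketch of a small 2D CNN with Batch Normalization after each convolution; the input shape, layer widths, and class count of 26 are assumptions for illustration.

```python
# Minimal sketch of a 2D CNN with Batch Normalization for static sign images.
# Input size (64x64x3) and 26 output classes are assumed values, not the
# paper's configuration; the Inception-style design is not reproduced here.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_static_sign_cnn(num_classes=26, input_shape=(64, 64, 3)):
    model = models.Sequential([
        layers.Conv2D(32, 3, padding="same", activation="relu", input_shape=input_shape),
        layers.BatchNormalization(),   # Batch Normalization added after the convolution
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.BatchNormalization(),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

A 3D variant for the video dataset would follow the same pattern with Conv3D/MaxPooling3D layers applied to stacks of extracted key frames.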
Publication, Distribution, Etc.: Journal of Scientific & Industrial Research, 2022-12-12

Electronic Location and Access: application/pdf, http://op.niscair.res.in/index.php/JSIR/article/view/52657

Data Source Entry: Journal of Scientific & Industrial Research, Vol. 81, No. 11 (2022)

Language Note: en

Nonspecific Relationship Entry: http://op.niscair.res.in/index.php/JSIR/article/download/52657/465570809