MediaPipe-LSTM: Multi-Task Pose Recognition for Safety and Creative Quality Control
Keywords:
Computer Vision, MediaPipe, LSTM, Pose Estimation, Anomaly Detection
Abstract
Spinal Muscular Atrophy (SMA) remains a critical genetic disease requiring early detection, yet conventional methods such as PCR and genetic sequencing suffer from high costs, long processing times, and limited accuracy in detecting minor mutations. This study addresses these challenges by developing an integrated system that combines CRISPR-Cas biotechnology with artificial intelligence for genetic disease detection. The research employs CRISPR system remodeling to optimize guide RNA design targeting the SMN1 and SMN2 genes, integrated with a hybrid deep learning model that combines a Convolutional Neural Network (CNN) with XGBoost for mutation prediction. Unlike traditional approaches, the system achieves detection accuracy exceeding 96.5% while significantly reducing processing time through automated, AI-driven interpretation of CRISPR signals. The integration enables real-time analysis of complex genetic patterns, minimizes false detection rates, and generates precision-based therapy recommendations tailored to individual mutation profiles. The approach offers substantial advantages over existing methods by providing faster, more accurate, and more cost-effective genetic screening suitable for neonatal programs, particularly in resource-limited settings. The system demonstrates strong potential for clinical implementation, supporting early intervention strategies that can markedly improve patient outcomes. By bridging molecular biology and computational intelligence, this research contributes a scalable, efficient, and clinically applicable framework for genetic disease detection, paving the way for personalized medicine approaches in managing hereditary disorders.
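
As a rough illustration of the hybrid CNN-to-XGBoost pipeline described in the abstract, the sketch below extracts features from encoded sequence windows with a small convolutional network and trains an XGBoost classifier on those features. The one-hot (4, 128) window encoding, the SequenceCNN architecture, the toy random data, and all hyperparameters are illustrative assumptions for this sketch, not the authors' published configuration.

# Minimal sketch of a hybrid CNN -> XGBoost pipeline.
# Shapes, layer sizes, and the one-hot sequence encoding are assumptions.
import numpy as np
import torch
import torch.nn as nn
import xgboost as xgb

class SequenceCNN(nn.Module):
    """1D CNN that turns one-hot encoded sequence windows into fixed-length features."""
    def __init__(self, in_channels: int = 4, feature_dim: int = 64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),          # global pooling -> (batch, 64, 1)
        )
        self.fc = nn.Linear(64, feature_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.conv(x).squeeze(-1)          # (batch, 64)
        return self.fc(z)                     # (batch, feature_dim)

# Toy data: 200 one-hot encoded windows of length 128 with binary mutation labels.
rng = np.random.default_rng(0)
X = rng.random((200, 4, 128)).astype(np.float32)
y = rng.integers(0, 2, size=200)

# Step 1: extract deep features with the (untrained, illustrative) CNN.
cnn = SequenceCNN()
with torch.no_grad():
    feats = cnn(torch.from_numpy(X)).numpy()

# Step 2: fit the XGBoost classifier on the CNN features.
clf = xgb.XGBClassifier(n_estimators=100, max_depth=4)
clf.fit(feats, y)
print("train accuracy:", clf.score(feats, y))

In a real setting the CNN would first be trained (or fine-tuned) on labeled CRISPR signal data before its features are handed to XGBoost; the two-stage split shown here simply makes the division of labor between the deep feature extractor and the gradient-boosted classifier explicit.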
License
Copyright (c) 2025 Raymond Divian Nathaniel

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
