Legal Protection for Victims of Artificial Intelligence Misuse in the Form of Deepfake Technology
Abstract
The development of technology has reached the Fourth Industrial Revolution, marked by the integration of information technology into the industrial world. In this era, information technology plays a crucial role in daily human activities, including the presence of artificial intelligence as a tool that assists human activity in the digital age. Artificial intelligence is a double-edged sword: it can help humans fulfill their needs, but it can also harm them if misused. One form of artificial intelligence misuse is deepfake technology. This AI-driven technology, which can manipulate a person's facial likeness in both videos and photographs, is increasingly being exploited by irresponsible parties. This article discusses legal protection for victims of deepfake misuse, focusing on regulations, resolution mechanisms, and the right to be forgotten. The research employs a normative legal research method with a conceptual approach. The findings indicate that legal protection for victims of deepfake misuse can be achieved by strengthening regulations on AI use, adopting a restorative justice approach for victims, and guaranteeing the right to be forgotten to safeguard their online reputation.
DOI: https://doi.org/10.57235/aurelia.v4i1.5519
Copyright (c) 2025 Gregorius Widiartana, Ebenhaezar Parlindungan Lumbanraja

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.