LEGAL UPDATES BY BE PARTNERS
Entering the AI Era:

How Prepared Are We to Protect Personal Data?

10 May 2023

ChatGPT, an Artificial Intelligence (“AI”) technology developed by OpenAI, has recently become a trending topic. Although still in the development stage, ChatGPT is already able to self-learn, diagnose problems, provide answers and solutions faster than humans, write essays, create business plans, and even replace human roles in programming.


All these capabilities are obtained by processing billions of parameters and pieces of information from various sources, some of which may contain elements of personal data collected from publicly available sources. It is therefore reasonable that concerns about privacy and data protection are increasingly relevant, especially since AI technologies can analyze and process data automatically, massively, and indiscriminately, without the ability to seek prior consent from the affected data subjects.


The main question is: how prepared are we to face the challenges of personal data protection in the AI era?


Personal Data Protection Challenges in the AI Era


Automated data processing and the use of AI technologies have the potential to change the way we live and work by increasing productivity. However, the automated collection and processing of personal data, such as identity information, including health and financial data, can raise privacy concerns. In addition, public awareness of privacy in the digital era remains low, and many people still do not know how their personal data are being accessed and utilised by companies, which further increases the risk of data misuse.


It must be understood that AI technology derives its intelligence from its ability to process billions of data points at massive scale. It is therefore impractical to expect AI to sort out which data it may process and which it may not, especially when such data is accessible in real time on the internet. Are existing data protection regulations sufficient to meet the growing challenges of AI?


In fact, Law No. 27 of 2022 regarding Personal Data Protection (“PDP Law”), enacted last year, classifies automated decision-making and data processing using new technologies as high-risk data processing activities. However, it is still unclear how the PDP Law and its implementing regulations will address these issues. There are therefore still many questions as to whether the PDP Law is sufficiently equipped to deal with the challenges posed by AI.


Are We Prepared?


In the AI era, the challenges of personal data protection are increasingly complex and require stronger regulation. Current data protection regulations are not yet fully prepared for the challenges posed by the development of AI technology, such as ever-larger data volumes, increasingly sophisticated AI systems, legal limitations, and low public awareness.


To overcome these challenges, the PDP Law must be continuously updated and strengthened in line with the development of AI technology. More specific rules governing automated decision-making and data processing using new technologies, as well as strict and effective supervision of AI companies and developers, are needed to ensure that the privacy and protection of data subjects' personal data are maintained and upheld. By doing so, it is hoped that existing personal data will not be misused.


Proactive Solution


Given the limitations of existing regulations in facing the challenges of personal data protection in the AI era, efforts are needed to enhance personal data protection through the adoption of a proactive approach by companies, the provision of training and certification, and the raising of public awareness of the importance of personal data protection.


In addition, several things can be done to strengthen personal data protection in the face of AI, namely:


  1. Provide Training and Certification: Companies should provide training and certification for employees involved in personal data processing and AI use. This training should cover privacy practices, good data processing ethics, risk management, and data security. Certification will help to ensure that employees at a company are knowledgeable in personal data processing and apply appropriate privacy practices.


  2. Promote Transparency and Accountability: Companies should increase transparency in collecting and processing personal data. They should provide clear and understandable information about their privacy practices and give individuals control over their data. In addition, companies should be accountable for their actions and provide reports on their privacy practices on a regular basis.


  3. Role of the Government: The government should play a more active role in ensuring the protection of personal data. It is therefore imperative to immediately establish the data protection agency mandated by the PDP Law. The government should also increase its supervision of companies that collect and process personal data and impose stricter sanctions on violators. In addition, it should conduct counseling and education campaigns on good privacy practices for the public.


It should be noted that how well prepared the current data protection regulations are for the AI era is still open to debate. However, it is clear that efforts must continue to be made to strengthen the protection of personal data against misuse through AI technologies. This can be accomplished through collaboration between governments, companies, and communities to raise awareness and adopt good privacy practices.

©2024. BE Partners. All Rights Reserved