Japan’s Draft AI Guidelines Aim to Curb Deepfakes and Build Digital Literacy

Japan is preparing new guidelines on the responsible use of artificial intelligence (AI), urging companies to take steps to curb the spread of deepfakes and other misleading AI-generated content while promoting AI literacy among users, according to a draft outline reported by Kyodo News on Wednesday.

Although the proposed guidelines will not be legally binding, they call on AI developers and operators to be transparent by disclosing key information about how their systems work. The move aims to address growing public concern over the misuse of generative AI technologies.

The initiative follows the implementation of Japan’s new AI law, which came fully into effect in September. The government is now drafting the guidelines as part of a broader policy framework to promote the safe and beneficial use of AI across industries.

The draft highlights several key responsibilities for both AI developers and users. Developers and service providers are urged to establish clear policies regarding data collection and training practices to reduce risks of biased outputs and privacy breaches.

They are also encouraged to apply the latest tools and expertise to tackle issues such as “hallucinations,” where AI generates inaccurate or fabricated information.

For individual and corporate users, the guidelines stress the importance of awareness and caution, warning against the potential risks of bias reinforcement, misinformation, and criminal misuse of AI-generated content.

The document also calls for greater accountability among central and local governments using AI technologies, emphasizing that administrative decisions involving AI must remain transparent and explainable to citizens.

Each ministry and municipality will be required to appoint a dedicated official responsible for managing AI-related risks and ensuring ethical compliance. – BERNAMA

HackWarn Opinion: A Responsible Step Toward AI Transparency and Safety

Japan’s move to draft AI guidelines reflects a growing global recognition that deepfakes and AI-generated misinformation are not just technical challenges; they are social and ethical threats.

At HackWarn, we believe that AI literacy (knowing how to identify, question, and verify AI-generated content) is just as important as building the technology itself.

Deepfakes and “AI hallucinations” can easily blur the line between truth and fiction, making it vital for both companies and individuals to understand how these tools work and what risks they carry.

Read more about deepfakes here: The Rise of Deepfake Scams: What They Are and How to Detect and Prevent AI-Powered Fraud
