[Survey Results] Over Half Aware of 'Sexual Deepfakes'; Awareness of Victims Also Rising. Countermeasures to AI-Generated Child Rights Violations Include Legal Regulation, Industry Regulation, and Improving Children's AI Literacy.

A survey by Child Fund Japan revealed that over half of respondents are aware of sexual deepfakes and that awareness of CSAM victims is rising. The organization's recommendations include legal and industry regulation and improving children's AI literacy.

Published: March 31, 2026

Child Fund Japan (Suginami-ku, Tokyo; Secretary-General: Katsuhiko Takeda), a specified non-profit organization, conducted a national awareness survey on "Generative AI and Child Human Rights Violations" targeting men and women aged 15-79 across Japan in 2026.

The survey results revealed an increase in people aware of sexual deepfakes, and a growing demand for legal and industry regulations to address online child rights violations, as well as for children to acquire self-defense skills such as AI literacy.


Protecting Children's Rights from Generative AI and Deepfakes! Advocacy Activities from the Perspective of Children and Youth.

With the rapid evolution and widespread adoption of generative AI, large volumes of images can now be instantly created and disseminated. In recent times, issues such as Child Sexual Abuse Material (CSAM) and sexual deepfakes generated by AI are spreading, necessitating legal frameworks and increased public awareness.

Child Fund Japan has previously undertaken initiatives to protect children from sexual exploitation, including surveys on grooming targeting youth and creating animated videos to warn children and young people about grooming. Furthermore, in April 2024, they held a symposium titled "The Threat of Generative AI to Children's Rights," and in December, an online seminar to learn about pioneering initiatives overseas. They also launched a working group of experts to propose new legislation to regulate AI-generated CSAM.

As part of these efforts, Child Fund Japan conducted a national awareness survey on the potential for generative AI to violate children's human rights and on possible countermeasures.

Survey Overview

Survey Period: January 25 (Sun) - February 7 (Sat), 2026

Target Audience: Men and women aged 15-79 nationwide

Number of Responses: 1,200

Survey Method: Individual in-home interview survey by interviewers (omnibus method)

Survey Implementation Organization: Japan Research Center Co., Ltd.

Key findings from the survey are as follows:

■ Survey Results Analysis Summary

① Half of all respondents are aware of sexual deepfakes, but the awareness rate is lower among women, especially those aged 15-19. The channels through which people learn of sexual deepfakes divide clearly by generation: SNS and the internet for younger respondents, mass media for older ones.

② The percentage of respondents who reported knowing a minor victim of AI-generated CSAM increased from 0.3% in the previous survey to 0.7% this time, suggesting a spread of CSAM harm and increased awareness of the issue.

③ Among measures to prevent harm by minors, strengthening school education is highly rated, followed by improving social awareness. Women tend to believe that home-based measures such as creating a safe environment and parental controls are necessary.

④ Regarding responses to the CSAM issue, most people expect regulations from the government and corporations. Meanwhile, the importance of improving children's own AI literacy increased from 29.3% in the previous survey to 34.0% this time, showing the largest growth among all countermeasures. In particular, men in their 30s and women in their 30s-40s show a generally higher recognition of the necessity of AI literacy compared to the previous survey.

⑤ For legal regulations concerning real and non-real children, the most common response was "All should be prohibited, regardless of whether they are real or not" at 59.3%. This was followed by "Content including sexual depictions of real children, including processed/synthesized content (using parts of faces/bodies / including sexual deepfakes), should also be prohibited" at 18.6%. In contrast, "Only content including sexual depictions of real children should be prohibited (as per current laws)" was 10.3%, showing little change from the previous 10.8%.

■ Detailed Survey Results (Partial)

*In the survey, each question was asked after providing definitions of terms (see footnotes).

Q: Are you aware of "sexual deepfakes"?

52.7% of respondents, approximately half, answered "I have heard of it."

Q: Please indicate your thoughts on how to address the issue of AI-generated CSAM involving children.

Regarding responses to the issue of AI-generated CSAM involving children, "The government should create laws to regulate it" (46.8%) and "SNS and app providers should impose certain restrictions on children's use of services" (43.7%) were the most common. These were followed by "The AI-related industry should impose technical regulations" (35.8%), "Companies should restrict children's use of devices (smartphones, etc.)" (34.9%), "Children themselves should enhance their AI literacy to develop self-defense skills" (34.0%), and "The government should regulate it through an AI strategy" (33.1%).

Based on the results of this survey, the research team makes the following recommendations:

■ Recommendations

Legal Responses

1. Establish new criteria for "depictions of real children" under the Child Pornography Prohibition Act to adequately address harm from AI-generated sexual deepfakes. (e.g., legal amendment to include depictions of a child's face or part of their body in the definition of child pornography)

Institutional Responses

1. Promote awareness using information contact media appropriate for each generation. (e.g., SNS for those under 40, newspapers/mass media for those over 50)

2. Create an environment where minors can spontaneously acquire AI literacy when using AI and SNS. (e.g., proactive learning in "Life Safety Education," introduction of pop-up displays questioning the appropriateness of SNS posts/shares)

3. Foster an environment where families, schools, and society can continuously discuss online sexual harm to children. (e.g., parent-child digital safety workshops in schools and communities, awareness campaigns to prevent AI misuse (e.g., using graduation album photos))

4. Carefully understand the perception and reality of harm among teenagers, as AI-generated CSAM harm may not be visible for those aged 15-19. (e.g., conducting age-appropriate research for minors)

Technical Responses

1. Strengthen safety measures and CSAM prevention for minors' use by companies providing SNS and AI services. (e.g., introduction of warning pop-ups, development and design of AI that does not learn CSAM)

Child Fund Japan will continue its activities, including policy recommendations and further research on children and youth, to protect children's rights.

The answers to each question, analysis, recommendations, and survey questionnaire are available at:

https://www.childfund.or.jp/blog/20260330survey

■ Request for Credit Notation when Citing/Reproducing

When citing or reproducing the survey results, please include the credit: "Survey by Child Fund Japan, a Specified Non-Profit Organization."

(Footnotes)

With the development of generative AI (artificial intelligence), content that sexually objectifies children*1 (CSAM: Child Sexual Abuse Material)*2 can be easily created and uploaded to SNS for dissemination.

Furthermore, generative AI can easily create fake sexual content of children from images of "real children." It can also easily create sexual content of "fictional children" who do not exist.

Additionally, determining whether the depicted child is real has become difficult due to the advancement of generative AI.

*1 In this context, "children" refers to "individuals under 18 years of age (minors)."

*2 "Content" refers to expressions in "images, videos, audio, etc." CSAM is an abbreviation for Child Sexual Abuse Material, some of which is not currently subject to legal regulation.

"Sexual deepfake" refers to fabricated content of a person produced with generative AI, often sexual content that combines a real person's face with a fabricated naked body. Such content also falls under "CSAM."

Apps and services that create such CSAM exist, and children are among the victims. These tools can also be used for bullying and harassment, and in some cases the perpetrators themselves are children.

~About Child Fund Japan~


An international cooperation NGO that has worked since 1975, primarily in Asia, to support the healthy growth of children living in poverty and the self-reliance of their families and communities. It supports children in the Philippines, Nepal, and Sri Lanka through sponsorship programs, which foster children's growth in part through letter exchanges with local children. It also collaborates with 10 member organizations of Child Fund to deliver aid to 36 million people in 66 countries worldwide.
They engage in activities that contribute to achieving SDGs Goals 1, 3, 4, 5, and 16, with a particular focus on "child protection" through awareness and advocacy activities to achieve Goal 16.2, "End abuse, exploitation, trafficking and all forms of violence against and torture of children."

FAQ

What is the awareness level of sexual deepfakes?

According to the survey, over half (52.7%) of respondents have 'heard of' sexual deepfakes, indicating increased awareness.

Is the damage from AI-generated CSAM increasing?

Yes, the percentage of respondents who reported knowing a minor victim of CSAM increased from 0.3% in the previous survey to 0.7% in the current one.

What measures are proposed to protect children's human rights?

Recommendations include strengthening legal regulations, age-specific awareness campaigns, AI literacy education, collaboration among families, schools, and society, and enhanced safety measures by SNS and AI companies.