Australia to Require AI-Made Child Abuse Material to Be Removed From Search Results
In a significant move to combat the proliferation of AI-generated child sexual abuse material, Australia’s internet regulator has announced that it will require search engines such as Google and Bing to take strict measures to prevent the dissemination of such harmful content.
This initiative comes as part of a new code drafted in collaboration with industry giants at the Australian government’s request, aimed at safeguarding the digital landscape from AI-generated child abuse material.
eSafety Commissioner Julie Inman Grant said the code, which is designed to protect the online community, imposes two key requirements on search engines. First, it compels them to ensure that AI-generated child abuse material does not appear in search results. Second, it prohibits the use of generative AI to produce synthetic versions of such explicit content, commonly referred to as “deepfakes.”
“The use of generative AI has grown so quickly that I think it’s caught the whole world off guard to a certain degree,” Inman Grant acknowledged. This rapid expansion of AI technology has necessitated a reevaluation of regulatory and legal frameworks governing internet platforms.
Inman Grant noted that an earlier code drafted by Google (owned by Alphabet) and Microsoft’s Bing did not address the emerging issue of AI-generated content, and she consequently asked the companies to revise their approach. “When the biggest players in the industry announced they would integrate generative AI into their search functions, we had a draft code that was clearly no longer fit for purpose. We asked the industry to have another go,” she said.
A spokesperson for the Digital Industry Group Inc., an Australian advocacy organization representing Google and Microsoft, expressed satisfaction with the approval of the revised code. “We worked hard to reflect recent developments in relation to generative AI, codifying best practices for the industry and providing further community safeguards,” the spokesperson stated.
This development follows the regulator’s earlier initiatives to establish safety codes for various internet services, including social media platforms, smartphone applications, and equipment providers, which will take effect in late 2023. However, the regulator continues to face challenges in developing safety codes for internet storage and private messaging services, with privacy advocates worldwide voicing concerns.
Australia’s proactive stance on AI-generated child abuse material offers a noteworthy example of the evolving regulatory landscape surrounding internet platforms. The code aims to strike a balance between harnessing the potential of AI technology and safeguarding the well-being of online users, particularly the most vulnerable, as the digital realm continues to evolve.