Modi becomes victim of deepfake, warns AI is problematic
The Indian leader believes artificial intelligence is a matter of “concern” and could cause chaos
Indian Prime Minister Narendra Modi on Friday flagged the use of artificial intelligence (AI) to create ‘deepfakes’ – fabricated online images and videos – describing them as “problematic” and a potential source of controversy in a country as diverse as India.
Speaking at a political function, the PM revealed that he himself had been a victim of the technology.
“[AI] can create anything. Recently, I saw a video where I was dancing garba [a form of Indian dance]. I was left astounded at its accuracy,” Modi told a gathering in Delhi, noting he had not danced garba since his school days.
The PM was referring to a video that went viral earlier this month, in which a person resembling him was seen dancing with a group of women. Initially, it was thought that hackers had simply morphed the prime minister’s face onto someone else’s body.
However, according to the fact-checking portal FACTLY, the person in the video was Vikas Mahate, an impersonator of the prime minister. An inspection of Mahate’s social media suggested that the video was recorded at a ‘Navratri’ festival function in Mumbai, India’s financial capital.
“It is a matter of concern,” Modi warned. “Artificial intelligence [is problematic] in a diverse society like India. In the past, movies would come and go. Nowadays, however, if a movie carries a controversial statement, it may not be allowed to run.”
The conversation surrounding deepfakes and their potential harm resurfaced in India after a doctored video of popular Indian actress Rashmika Mandanna went viral on social media earlier this month. The actress’s face had been superimposed onto the body of British-Indian social media influencer Zara Patel, who had posted the original video last month, triggering an outcry across the country.
Following this episode, New Delhi demanded that social media companies take “decisive action” against deepfakes and remove them within 36 hours of first receiving a report of suspicious content. Platforms that fail to implement such measures could lose their ‘safe harbor’ immunity and face criminal and judicial proceedings.