The Indian government has released a new advisory for tech companies involved in the development of artificial intelligence (AI) tools. According to the advisory, these companies must obtain government approval before releasing their AI tools to the public. The approval is specifically required for tools that are deemed “unreliable” or still in the trial phase. Additionally, the advisory states that these tools should be labeled to indicate that they may provide inaccurate answers to queries.
The advisory also emphasizes the importance of ensuring that these AI tools do not pose a threat to the integrity of the electoral process, especially with general elections expected to take place in the coming months.
This advisory comes shortly after one of India’s top ministers criticized Google over inaccurate and biased responses from its AI tool, Gemini. Google publicly apologized for Gemini’s shortcomings and acknowledged that the tool may not always be reliable, particularly on current social topics.
Rajeev Chandrasekhar, India’s deputy IT minister, stressed the legal obligation of platforms to prioritize safety and trust. He stated that simply apologizing for unreliable AI tools does not exempt companies from the law.
In November, the Indian government announced its plans to introduce new regulations to combat the spread of AI-generated deepfakes ahead of the upcoming elections. This move aligns with similar actions taken by regulators in the United States.
However, the tech community in India has raised concerns about the government’s latest AI advisory. Some argue that India is a leader in the tech space and that overregulation could cost the country that leadership position.
In response to this criticism, Chandrasekhar clarified that there should be legal consequences for platforms that enable or directly produce unlawful content. He emphasized that the advisory is meant to inform those deploying under-tested AI platforms on the public internet about their obligations and the potential consequences according to Indian laws.
On February 8, Microsoft announced a partnership with Indian AI startup Sarvam to bring an Indic-voice large language model to its Azure AI infrastructure. This collaboration aims to reach a larger user base in the Indian subcontinent.
In conclusion, the Indian government’s advisory underscores the need for government approval before unreliable or trial-phase AI tools are released, and highlights the importance of ensuring the reliability and integrity of such tools, particularly in the context of the upcoming elections. While the tech community has pushed back, the government maintains that platforms enabling unlawful content should face legal consequences.