Many worries about artificial intelligence – deepfakes, bias – translate into trust issues. By 2030, AI is expected to contribute more to the global economy than the current output of India and China combined. Developing countries show higher rates of AI adoption. Consumers, meanwhile, expect artificial intelligence to deliver better customer service.
“We operate in many industries, and knowing our customers helps build loyalty. That means building trust,” says Rucha Nanavati, CIO, Mahindra Group.
GenAI has been in the pilot phase, but is moving to the operational phase in 2024. “The problem with AI regulation right now is that there are no clearly defined regulations,” says Nanavati. The European Union's AI Act classifies risks into unacceptable, high and limited risk categories. India has guidance from NITI Aayog and AI advisories, but regulation remains limited.
Regulations and liability
While AI regulation can help us act responsibly, it comes with challenges. Rules can hamper innovation, risk introducing regulatory bias, and vary from country to country, with gaps and ambiguities. Excessive or overly rigid regulations can also lead to unethical practices, as loopholes are found to avoid them.
The risks of irresponsible use of AI include unreliable algorithms and security threats, intellectual property rights violations, legal and financial penalties, and reputational damage.
Solve together
The only way to solve this problem is to work together. Accountability is more essential than regulation: artificial intelligence can help you make decisions, but it cannot make decisions for you. Regulations will come into force eventually, but we must ensure there is no reputational damage by then. “Artificial intelligence has great power, but with it comes great responsibility,” Nanavati concludes.
(With inputs from Vaishnavi J. Desai)