As artificial intelligence develops rapidly, world leaders, tech giants and thousands of other representatives have flown to New Delhi to discuss how to deal with the technology at the AI Impact Summit, which kicked off on Monday.
Indian officials hope the event will spur further investment commitments, but analysts warn that thorny issues on the agenda, such as job disruption and security concerns, could make concrete commitments from world leaders less likely.
While frantic demand for generative AI has boosted profits for many tech companies, anxiety about its risks to society and the environment is growing.
The five-day AI Impact Summit aims to announce a “common roadmap for global AI governance and collaboration.”
It is the fourth annual gathering to discuss the problems and opportunities presented by artificial intelligence, following international conferences in Paris, Seoul and Britain's wartime codebreaking center Bletchley Park.
Billed as the largest edition to date, the summit is expected by the Government of India to draw tens of thousands of visitors from across the industry.
These include 20 national leaders and 45 ministerial delegations, who will sit alongside tech CEOs such as OpenAI’s Sam Altman and Google’s Sundar Pichai.
Inaugurating the event late on Monday, Indian Prime Minister Narendra Modi said it was “further proof of the rapid progress our country is making in the field of science and technology” and “demonstrated the capabilities of our youth.”
Across the busy conference floor, panel discussions and roundtables were held on topics such as how artificial intelligence can make India’s dangerous roads safer and how women in South Asia are using the technology.
Three ‘Sutras’
But Amba Kak, co-executive director of the AI Now Institute, said there were questions about whether Modi and others like French President Emmanuel Macron and Brazilian President Luiz Inacio Lula da Silva would take meaningful steps to hold AI giants accountable.
“Even the industry’s voluntary commitments, which are widely publicized at these events, are largely narrow ‘self-regulatory’ frameworks that allow AI companies to continue to grade their own work,” she told AFP.
The 2023 Bletchley gathering was called the AI Safety Summit, but the name of the conference has changed as it has grown in size and scope.
At last year’s AI Action Summit in Paris, dozens of countries signed a statement calling for efforts to regulate AI technology to make it “open” and “ethical.”
The US did not sign on, with Vice President Vance warning that “over-regulation…could stifle transformative industries that are taking off”.
The theme of the Delhi summit is “People, Progress, Planet” – known as the three “Sutras”. AI safety remains a priority, including the dangers of misinformation such as deepfakes.
Kelly Forbes, director of the Asia Pacific Institute for Artificial Intelligence, said “there is definitely room for change,” though it may not come fast enough to prevent harm to minors. Her group is examining how Australia and other countries could require platforms to address the issue.
Artificial Intelligence for the “Common People”
Organizers emphasized that this year’s Artificial Intelligence Summit is the first to be hosted by a developing country.
India’s Ministry of Information Technology said: “This summit will shape a common vision of artificial intelligence that truly serves the masses, not just a few.”
Last year, India jumped to third place, overtaking South Korea and Japan, in an annual global artificial intelligence competitiveness ranking compiled by researchers at Stanford University.
But experts say India still has a long way to go before it can compete with the United States and China, despite its massive infrastructure plans and innovation ambitions.
Main Focus
The five big issues on the agenda are: job loss fears, preventing real-world harm, energy needs, moves to regulate artificial intelligence and existential fears.
AI insiders have also expressed existential concerns, arguing that the technology is moving toward so-called “artificial general intelligence,” the point at which machines’ abilities would match those of humans.
Employees at OpenAI and rival startup Anthropic have publicly resigned while speaking out about the ethical implications of their technology.
Anthropic warned last week that its latest chatbot model could “deliberately support chemical weapons development and other heinous crimes in small ways.”
Researcher Eliezer Yudkowsky, author of the 2025 book If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All, has also compared AI to the development of nuclear weapons.
– AFP, with additional input and editing by Jim Pollard
Note: The images in this report were changed on February 16, 2026.