India stands at a critical inflection point in its urban development journey, where artificial intelligence promises transformative solutions for complex urban challenges while simultaneously threatening to amplify existing socio-economic inequalities. With over ₹1.64 lakh crores invested in the Smart Cities Mission and 94% of 8,067 projects completed as of 2024, the integration of AI in urban governance has moved beyond pilot phases to large-scale implementation across 100 Indian cities.

However, beneath this technological optimism lies a more complex reality: a 42-percentage-point gap between urban and rural internet access, algorithmic systems operating as "black boxes" without transparency, and the absence of comprehensive AI governance frameworks. This analysis examines how India's pursuit of AI-driven urban solutions navigates between the promise of efficiency and the peril of deepening digital inequalities.

The scale and scope of AI integration in Indian cities

The Smart Cities Mission has emerged as the primary vehicle for AI deployment in Indian urban governance, with all 100 cities now operating Integrated Command and Control Centres (ICCCs) that utilize AI for traffic management, waste management, and public safety. The scale of this technological infrastructure is unprecedented: over 84,000 CCTV surveillance cameras have been installed, with cities like Ahmedabad deploying 6,000 cameras for traffic management through Automatic Number Plate Recognition (ANPR) and Red Light Violation Detection systems.

Smart cities mission progress: project completion status (2024)

The diversity of AI applications across Indian cities reveals both the promise and complexity of urban AI deployment. Cities like Agartala have implemented AI-driven traffic density algorithms across 22 junctions, using predictive analysis from historical data to optimize traffic flow. Bengaluru has pioneered AI-powered sentiment analysis of citizen feedback alongside traditional law enforcement applications, while Pimpri Chinchwad has integrated AI across multiple sectors from security monitoring to educational effectiveness measurement in municipal e-classrooms.

These implementations represent a significant departure from traditional urban governance models. The ₹2.05 lakh crores allocated for smart city initiatives by 2024 has enabled cities to move beyond basic digitization toward sophisticated AI-driven decision-making systems. Indore's AI-powered sewage management system, which has prevented 205 million liters per day of sewage from entering rivers, demonstrates the tangible environmental benefits of well-implemented AI systems.

The digital divide: India's technological paradox

Despite these technological advances, India faces a stark digital paradox that threatens to undermine the inclusive vision of "AI for All." Only 24% of rural households have internet access compared to 66% in urban areas, creating a fundamental barrier to equitable AI deployment. This divide extends beyond mere connectivity: while 95.15% of villages have 3G/4G access, only 14% of rural citizens actively use the internet compared to 59% in urban areas.

The implications of this digital divide for AI governance are profound. AI systems trained on urban-centric data risk perpetuating what researchers term "structural data exclusion", where rural populations, marginalized communities, and those without digital access are systematically omitted from algorithmic decision-making processes. This exclusion is not merely technical but represents a structural flaw that reflects historical inequalities now embedded in ostensibly neutral technology.
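The mechanism of structural data exclusion can be made concrete with a deliberately simple sketch. All names and numbers below are hypothetical; the point is only that a rule tuned on urban-only records, where digital footprints are rich, reads the absence of records as ineligibility:

```python
# Hypothetical, pure-Python sketch of "structural data exclusion": an
# eligibility cutoff tuned only on urban records systematically rejects
# applicants whose digital footprints are thin because of access, not need.

# digital_footprint: a 0-to-1 score derived from online activity records
urban_training = [0.8, 0.7, 0.9, 0.75, 0.85]  # urban-only training sample
# "learn" a cutoff at half the urban average footprint
threshold = sum(urban_training) / len(urban_training) * 0.5

def eligible(digital_footprint: float) -> bool:
    """Approve only applicants with a sufficient digital footprint."""
    return digital_footprint >= threshold

urban_applicants = [0.9, 0.6, 0.8]
rural_applicants = [0.1, 0.3, 0.2]  # low scores reflect connectivity gaps

urban_pass_rate = sum(eligible(s) for s in urban_applicants) / len(urban_applicants)
rural_pass_rate = sum(eligible(s) for s in rural_applicants) / len(rural_applicants)
# urban_pass_rate is 1.0 while rural_pass_rate is 0.0: the rule never
# saw rural data, so it treats missing digital records as ineligibility
```

The bias here is not in any single line of code but in the training sample; no amount of tuning the threshold on urban data alone can make the rule fair to populations it has never observed.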

The gender dimension of this divide adds another layer of complexity. Rural women are nearly twice as disadvantaged in digital access compared to their urban counterparts, raising critical questions about how AI systems deployed in governance, healthcare, and social services account for this differential access. When AI-powered systems are used for welfare distribution, healthcare delivery, or educational services, they risk inadvertently excluding the very populations they are designed to serve.

As noted in recent research, "36% of 200 migrant women workers interviewed said they faced biometric authentication failures during pregnancy-related hospital visits," highlighting how digital governance systems can create barriers rather than opportunities for vulnerable populations.

Algorithmic governance without algorithmic transparency

One of the most pressing challenges in India's AI-driven urban governance is the prevalence of algorithmic systems that operate without transparency or accountability mechanisms. AI systems frequently function as "black boxes", making it difficult to trace how decisions are reached in critical domains such as welfare distribution, policing, and urban service allocation.

This lack of transparency is particularly problematic given the scale of AI deployment. Cities are using AI for predictive policing algorithms that may unfairly target particular communities based on biased historical data, while automated systems handle everything from property tax assessment in Pune to traffic violation detection across multiple cities. Without adequate explainability mechanisms, citizens have limited recourse to understand or challenge AI-driven decisions that directly impact their lives.

The absence of mandatory algorithmic audits compounds this challenge. While NITI Aayog's framework emphasizes the need for algorithmic transparency standards and bias mitigation, actual implementation remains voluntary and inconsistent across cities. The Comptroller and Auditor General's AI Strategy Framework provides guidance for auditing AI systems in government, but its adoption and enforcement remain limited.

Recent findings reveal significant gaps in India's responsible AI governance, with "existing government frameworks lacking enforcing power" and "coverage gaps in government frameworks for Responsible AI". This regulatory vacuum means that even as cities deploy sophisticated AI systems, there are inadequate mechanisms to ensure these technologies serve public interests equitably.

Data governance challenges and privacy concerns

The rapid deployment of AI in Indian cities has occurred without robust data protection frameworks. The Digital Personal Data Protection Act (DPDPA) 2023 was passed, but implementation has been delayed beyond the initial 2024 timeline, leaving AI systems operating in a regulatory vacuum regarding data privacy and protection.

This regulatory gap has significant implications for urban AI deployment. Cities are collecting vast amounts of personal data through IoT sensors, surveillance systems, and citizen service applications but lack standardized protocols for data quality, privacy protection, and ethical use. The 6,000 surveillance cameras in Ahmedabad and similar systems across other cities raise concerns about mass surveillance and potential privacy violations without adequate legal safeguards.

The pervasive nature of data collection in smart cities creates what researchers describe as "operational vulnerabilities", where "data within smart city applications should be able to withstand modification, disruption, inspection, unauthorised access, disclosure and annihilation". However, Indian cities often lack the basic security infrastructure to ensure these protections.

Inter-organisational gaps and the lack of clear communication between planners and technical teams further complicate data governance. Cities often implement AI systems without fully understanding their implications for privacy, bias, or community impact, leading to what researchers describe as a technocracy that sidelines people's voices.

Surveillance infrastructure and facial recognition technology

India's urban AI deployment has witnessed an unprecedented expansion in surveillance infrastructure, raising critical concerns about privacy, civil liberties, and the potential for discriminatory enforcement. The government has deployed 126 Facial Recognition Technology (FRT) systems across the country as of June 2023, with spending of ₹1,499.41 crores.

The scope of this surveillance infrastructure is staggering. Delhi's "Safe City Project" includes deployment of facial recognition systems on identified CCTV cameras to monitor suspected persons and criminals, while the Ministry of Home Affairs has approved similar systems for seven major railway stations: Mumbai's CSMT, New Delhi, Bengaluru, Chennai, Howrah, Ahmedabad, and Pune.

These systems are integrated with the National Database on Sexual Offenders, which contains over 2 million profiles, enabling automated identification and tracking of individuals in public spaces. While authorities frame this as enhancing public safety, particularly for women, the implementation occurs without comprehensive legal frameworks governing the use of such invasive technologies.

The Delhi Police's C4I system maintains a facial recognition database of around 350,000 criminal records, and the force plans to install 10,000 additional CCTV cameras under the Safe City Project, all equipped with dynamic facial recognition capabilities. This represents a quantum leap in the state's ability to monitor and track individuals in public spaces.

However, research reveals significant concerns about the accuracy and fairness of these systems. FRT demonstrates a high error rate in identifying individuals from marginalized communities, increasing the likelihood of false arrests and accusations. The technology "disproportionately impacts marginalised communities" and "reinforces bias rather than ensuring fair law enforcement".

The bias challenge: when algorithms amplify inequality

Algorithmic bias represents one of the most significant risks in India's AI-driven urban governance. AI models trained on biased datasets can produce discriminatory results in critical areas such as social services, lending decisions, and law enforcement. In a country as diverse as India, where cities contain significant informal economies, varied religious and cultural practices, and complex social hierarchies, standardized AI solutions risk perpetuating or amplifying existing inequalities.

The manifestations of algorithmic bias in Indian governance are well-documented and deeply concerning. The Aadhaar-based digital identity systems that are vital to Indian welfare programmes frequently exclude marginalised people, limiting their access to critical services. Research documents cases where "Santoshi Kumari died from malnutrition when her family's ration card was withdrawn because it was not linked to Aadhaar" and "Shrimati Devi's pension of Rs. 1000 per month was incorrectly transferred to someone else owing to a banking system error".

These are not isolated incidents but systemic patterns that reveal how AI systems can transform into "algorithms of oppression". The challenge is compounded by the lack of diversity in development teams creating these AI systems. When AI systems are designed without adequate representation of India's demographic diversity, they may embed implicit biases that systematically disadvantage marginalized communities.

In welfare distribution, AI systems have wrongfully removed thousands of legitimate beneficiaries from social assistance schemes. The Samagra Vedika system in Telangana, which uses AI for welfare delivery, has been criticized for "erroneously attributing data on assets such as car ownership to individuals, leading to them no longer being eligible for social assistance schemes". This represents what Amnesty International describes as a significant human rights concern around people's rights to "social security, privacy, redress and remedy, equality and non-discrimination".

Welfare distribution and the algorithmic denial of rights

The deployment of AI in welfare distribution represents perhaps the most critical area where algorithmic bias intersects with basic human rights. Recent research reveals that "the deployment of AI in welfare benefit allocation accelerates decision-making but has led to unfair denials and false fraud accusations". This acceleration prioritises efficiency over accuracy, with devastating consequences for vulnerable populations.

In India, the digitisation of welfare systems through platforms like Aadhaar has created what researchers term a "digital poorhouse", where technology that promises inclusion actually creates new forms of exclusion. An Al Jazeera investigation of Telangana's Samagra Vedika system revealed how "people had lost access to crucial social protection schemes after the introduction of a digitalised system" that uses algorithmic decision-making to assess eligibility.

The technical design of these systems raises profound questions about accountability and redress. Entity Resolution technology, provided by private companies like Posidex Technologies, operates as a black box system that "collates data on individuals from multiple sources to assess their eligibility for social security schemes". When these systems make errors—attributing incorrect asset information or failing to recognise legitimate beneficiaries—citizens have limited recourse for appeal or correction.
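The failure mode described above can be illustrated with a toy fuzzy-name merge. Posidex's actual matching logic is not public, so everything below, including the names and the similarity cutoff, is hypothetical; it shows only how matching on name similarity alone can attach one person's assets to another:

```python
# Hypothetical sketch of how loose entity resolution can mis-attribute
# assets across databases. Uses only the standard library (difflib).
from difflib import SequenceMatcher

def names_match(a: str, b: str, cutoff: float = 0.85) -> bool:
    """Treat two names as the same person above a similarity cutoff."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= cutoff

# Two distinct people with near-identical names in separate databases.
welfare_rolls = [{"name": "R. Lakshmi", "district": "Warangal"}]
vehicle_registry = [{"name": "R Lakshmi", "asset": "car"}]

merged = []
for beneficiary in welfare_rolls:
    record = dict(beneficiary)
    for reg in vehicle_registry:
        if names_match(beneficiary["name"], reg["name"]):
            # Mis-merge: the asset is attributed on name similarity alone,
            # and a downstream rule could read it as grounds for removal.
            record["asset"] = reg["asset"]
    merged.append(record)
```

After the merge, the beneficiary's record carries a car she may never have owned. Without an appeal mechanism that exposes which source record triggered the match, the affected citizen has no practical way to contest the inference.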

Research documents that "out of 12 cases of starvation deaths identified, seven are related to Aadhaar in one way or another", highlighting the life-and-death consequences of algorithmic failures in welfare distribution. These systems don't just deny benefits; they can deny the right to life itself when vulnerable populations lose access to food, healthcare, and social support.

The gender and caste dimensions of these exclusions are particularly concerning. Aadhaar-linked benefit schemes use computerised screening to reduce leakages but worsen the vulnerabilities of historically oppressed groups. Technical failures and authentication concerns especially exclude Dalits and women, while algorithmic biases built into digital governance disproportionately advantage urban and upper-caste groups.

Regulatory frameworks: the policy-implementation gap

India's approach to AI regulation reflects a fundamental tension between fostering innovation and ensuring responsible deployment. The Ministry of Electronics and Information Technology (MeitY) favours a "light touch approach", while other government agencies push for more comprehensive regulation. This fragmentation has resulted in a policy-implementation gap where high-level principles exist without enforceable mechanisms.

NITI Aayog's Responsible AI principles, established in 2021, emphasise transparency, accountability, and fairness but lack enforcing power. The organisation's "AI for All" strategy provides broad guidance but fails to address specific implementation challenges in urban governance contexts. As researchers note, even as we reach the end of 2024, no comprehensive framework has been put in place despite years of policy development.

The regulatory vacuum is particularly evident in urban governance applications. While cities deploy AI for critical functions like traffic management, waste distribution, and public safety, they operate without sector-specific guidelines for algorithmic accountability, bias testing, or community participation in AI system design. The absence of mandatory impact assessments for AI systems in urban governance means cities can implement these technologies without systematically evaluating their social, economic, or environmental implications.

Current analysis reveals that India's regulatory approach suffers from "lack of technical expertise, failure to issue clear and timely regulatory guidance, lack of investigative powers, ineffective or inconsistent enforcement, and lack of grievance redressal mechanisms". These capacity constraints prevent effective implementation of new AI regulations even when frameworks exist on paper.

To address these gaps, experts recommend establishing an "AI Safety Institute" to develop state capacity in foundational research, safety and testing, training and awareness, and cross-border collaboration on AI governance. However, such institutional innovations remain proposals rather than implemented solutions.

International perspectives and India's regulatory position

India's approach to AI regulation differs significantly from international models, particularly the European Union's AI Act and China's more restrictive frameworks. While the EU emphasises risk-based regulation with strict requirements for high-risk AI systems, India has pursued a principles-based approach that prioritises innovation over prescriptive regulation.

The EU's AI Act provides a comprehensive framework that classifies AI systems based on risk levels, with "malicious uses, algorithmic discrimination, transparency failures, systemic risks, and loss of control" identified as key regulatory concerns. In contrast, India's regulatory approach remains fragmented, with "a constellation of legislations responding to technology-related issues" rather than comprehensive AI-specific regulation.

This approach has both advantages and limitations. The flexibility allows for rapid deployment and experimentation in urban governance applications, as evidenced by the diverse AI implementations across Smart Cities. However, it also creates risks of inconsistent implementation and inadequate protection for citizens' rights and interests.

Global AI governance rankings place India 32nd in the Government AI Readiness Index, indicating significant room for improvement in regulatory frameworks and institutional capacity. Countries with higher rankings typically have more comprehensive legal frameworks, stronger institutions, and better mechanisms for public participation in AI governance decisions.

Recent policy analysis suggests that India should consider "gap analysis to identify areas where new regulations are required, encourage self-regulation as a starting point, and empower government to address AI risks through legal provisions." However, implementing these recommendations requires political will and institutional capacity that remain underdeveloped.

The human cost of automated governance

The shift toward AI-driven urban governance has profound implications for employment and human agency in public administration. Manufacturing and IT services sectors, which account for 10 million and 3 million jobs respectively, are particularly impacted by AI automation. In urban governance, this translates to reduced human discretion in decision-making processes and potential displacement of administrative roles.

More concerning is the erosion of human oversight in critical government functions. When AI systems handle welfare distribution, housing allocation, or public service delivery, the absence of meaningful human review can lead to errors that significantly impact citizens' lives. The challenge of appealing algorithmic decisions becomes particularly acute when systems lack explainability or when human operators cannot override AI recommendations.

Research reveals that "AI does not simply restrict or enhance discretion but redistributes it across institutional levels", fundamentally altering the nature of public administration. While AI may "simultaneously strengthen managerial oversight, enhance decision-making consistency, and improve operational efficiency", it also introduces "new risks, such as data bias, algorithmic opacity, and fragmented responsibility across actors".

The digital divide compounds these challenges by creating differential access to AI-mediated services. Citizens with limited digital literacy or access may find themselves increasingly excluded from government services that assume technological proficiency. This creates a two-tier system where digital access determines the quality and accessibility of public services.

Studies show that "claimants are less willing to accept AI in welfare systems" compared to the general population, raising concerns that "using aggregate data for calibration could misalign policies with the preferences of those most affected". This disconnect between policy preferences and affected populations' interests suggests that democratic legitimacy requires more nuanced approaches to AI deployment in public services.

Environmental and sustainability implications

The environmental implications of large-scale AI deployment in urban governance remain largely unaddressed in Indian policy frameworks. AI systems require significant computational resources and energy, raising questions about the sustainability of widespread AI deployment in cities already facing resource constraints.

NITI Aayog's National Strategy discusses environmental sustainability only as a use case of AI, failing to acknowledge the environmental costs of AI infrastructure. Cities deploying thousands of sensors, cameras, and data processing centres must account for the energy consumption and electronic waste generated by these systems.

However, there are positive examples of AI contributing to environmental sustainability. Visakhapatnam's floating solar plant saved $0.28 million and prevented 3,000+ tonnes of CO₂ emissions, while Indore's AI-powered sewage management demonstrates how intelligent systems can address environmental challenges. The key is ensuring that environmental considerations are integrated into AI system design from the outset rather than treated as an afterthought.

The scale of surveillance infrastructure raises additional environmental concerns. The 84,000 CCTV cameras installed across smart cities represent significant energy consumption and electronic waste generation. Without comprehensive lifecycle assessments, cities risk creating environmental costs that outweigh the benefits of AI-driven efficiency gains.

Community participation and democratic governance

The integration of AI in urban governance raises fundamental questions about democratic participation and community voice in technological decision-making. Traditional governance models assume opportunities for public consultation and feedback, but AI systems often operate at speeds and scales that preclude meaningful community input.

People's voices are often sidelined by technocracy, making AI governance systems "not so easy to operate" for ordinary citizens. This technocratic approach risks undermining the participatory governance principles that are essential for democratic legitimacy, particularly in a diverse country like India, where "one size doesn't fit all".

The Smart Cities Mission guidelines emphasize citizen participation, requiring that proposals be "citizen-driven from the beginning, achieved through citizen consultations". Pune's smart city proposal claimed to engage 300,000 families with 1.2 million inputs and cover 50% of households. However, the effectiveness of this engagement remains questionable when "solutions or votes are invited for some of the proposed solutions by the ULB" (urban local body) rather than genuine co-creation.

Research reveals significant challenges in meaningful citizen engagement. Digital divide and digital illiteracy are serious challenges for smart city participation. Additionally, "bureaucratic control, political intervention, and a top-down approach characterise Indian city governments, leading to more ceremonial public participation."

The challenge is developing mechanisms for community participation in AI system design, deployment, and monitoring. This requires not only technical solutions like explainable AI but also institutional innovations that create space for citizen feedback, community oversight, and democratic accountability in algorithmic governance.

Recent innovations show promise for enhanced citizen engagement. AI-powered tools can analyse vast datasets to understand diverse needs and preferences of citizens, while "chatbots and virtual assistants provide tailored information and updates about city projects, services, and events." However, these tools must be designed to bridge rather than deepen digital divides.

Sector-specific challenges and opportunities

Different sectors of urban governance present unique challenges and opportunities for AI deployment. In healthcare, AI systems show promise for disease surveillance, telemedicine, and resource allocation but raise concerns about privacy, access, and quality of care for digitally excluded populations. The Aarogya Setu app during COVID-19 demonstrated both the potential reach and limitations of AI-powered health interventions.

Transportation and mobility represent areas of significant AI success, with cities like Bengaluru reducing passenger wait times by 15% through AI-based route optimisation. However, these systems risk optimising for users with smartphone access while potentially disadvantaging those who rely on informal transportation networks.

Public safety and policing applications raise the most significant concerns about bias and rights violations. AI-powered facial recognition, crime prediction, and surveillance systems must operate under clear legal safeguards to prevent mass surveillance, wrongful profiling, and human rights violations. The deployment of 84,000 CCTV cameras across smart cities requires robust oversight mechanisms to ensure they enhance rather than undermine public safety.

In education, AI applications in municipal e-classrooms show promise for personalised learning and effectiveness measurement. However, the digital divide means that AI-enhanced education may become another mechanism for inequality rather than empowerment.

Waste management represents one of the most successful AI applications, with systems like Indore's sewage prevention AI demonstrating clear environmental benefits. These applications face fewer bias concerns and generate measurable positive outcomes, suggesting that environmental applications may offer the most promising path for inclusive AI deployment.

Economic implications and digital divides

The economic implications of AI deployment in urban governance extend beyond implementation costs to questions of digital equity and economic inclusion. While cities invest heavily in AI infrastructure, the benefits may accrue primarily to digitally connected and educated populations, potentially exacerbating existing economic inequalities.

The ₹1.64 lakh crores invested in Smart Cities represents a significant public investment that should benefit all citizens, not just those with digital access. However, current implementation patterns suggest that AI-driven services may create "smart enclaves" within cities, privileging certain neighbourhoods or communities while leaving others behind.

Economic modelling suggests that effective AI governance requires complementary investments in digital literacy, infrastructure access, and social safety nets to ensure that technological advancement contributes to rather than detracts from economic inclusion. Cities must develop strategies to bridge the digital divide while deploying AI technologies.

Rural India accounts for 398.35 million of the country's 954.4 million internet subscribers, approximately 42% of internet users, despite rural areas comprising 65% of India's population. This disparity suggests that AI systems trained primarily on urban data may systematically under-represent rural perspectives and needs.
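The arithmetic behind the cited figures is worth making explicit, since the gap it exposes is exactly the representation gap an AI system inherits when trained on usage data:

```python
# Quick arithmetic check of the subscription figures cited above
# (figures in millions, as reported).
rural_subscribers = 398.35
total_subscribers = 954.4

rural_share_of_users = rural_subscribers / total_subscribers  # about 0.42
rural_share_of_population = 0.65  # cited rural population share

# the under-representation, in percentage points, that usage-trained
# systems inherit relative to the actual population
gap_points = (rural_share_of_population - rural_share_of_users) * 100
```

A roughly 23-point shortfall between population share and user share means any model calibrated on observed internet behaviour is, by construction, weighted toward urban patterns.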

Research reveals that "algorithms could learn that most people in a particular job role are male and therefore favour men in job applications," highlighting how AI systems can perpetuate economic discrimination across gender lines. Similar patterns emerge across caste, religious, and regional dimensions, potentially embedding historical inequalities in automated decision-making systems.

Figure: Digital divide in India, urban vs rural internet access and usage (2024).

Privacy, surveillance, and civil liberties

The expansion of AI-powered surveillance infrastructure in Indian cities raises fundamental questions about the balance between security and civil liberties. The deployment of 126 FRT systems across the country represents "a quantum leap in the state's ability to identify and monitor individuals", fundamentally altering the relationship between citizens and the state.

Research reveals concerning patterns in how these systems operate. Facial recognition is uniquely intrusive: real-time, automated identification at scale, erasing public anonymity. When combined with other data streams, authorities can build detailed profiles of individuals without their knowledge or consent.

The National Database on Sexual Offenders integration with railway station surveillance demonstrates both the promise and peril of AI-powered safety systems. While the stated goal of protecting women from sexual violence is laudable, the infrastructure created can easily be repurposed for broader population surveillance and control.

Privacy concerns are compounded by the absence of comprehensive data protection laws. While the Digital Personal Data Protection Act 2023 exists on paper, its delayed implementation means that AI surveillance systems operate without adequate legal safeguards. Citizens have limited recourse when surveillance systems make errors or are misused.

The impact on vulnerable populations is particularly concerning. "Surveillance cameras in public spaces contribute to a sense of being constantly monitored," while "facial recognition technology is used in public areas for law enforcement without clear regulations." For marginalised communities already subject to discriminatory policing, AI-powered surveillance can amplify existing patterns of harassment and intimidation.

Recommendations for inclusive AI governance

Based on this comprehensive analysis, several critical recommendations emerge for more inclusive and accountable AI governance in Indian cities:

Mandatory Algorithmic Impact Assessments: Cities should be required to conduct comprehensive impact assessments before deploying AI systems, evaluating potential effects on different communities, privacy implications, and bias risks. These assessments should include meaningful community consultation and public disclosure of findings.

Establishment of Algorithmic Accountability Offices: Cities need dedicated offices or officials responsible for overseeing AI system deployment, conducting regular audits, and ensuring compliance with ethical AI principles. These offices should have both technical expertise and community engagement capabilities.

Data Governance Frameworks: Implementation of robust data governance frameworks that protect privacy, ensure data quality, and establish clear protocols for data sharing and use. This includes accelerating the implementation of the Digital Personal Data Protection Act and developing city-specific data governance policies.

Community Participation Mechanisms: Development of institutional mechanisms that enable meaningful community participation in AI system design, deployment, and monitoring. This could include citizen advisory committees, public hearings, and community feedback systems for AI governance decisions.

Bridging the Digital Divide: Targeted investments in digital infrastructure, literacy programmes, and access initiatives to ensure that AI-mediated services are accessible to all citizens regardless of their digital capabilities. This includes support for voice-based interfaces, multilingual systems, and offline service alternatives.

Transparency and Explainability Requirements: Mandatory requirements for AI system transparency, explainability, and auditability, particularly for systems that directly impact citizens' access to services or rights. This includes public documentation of AI system purposes, data sources, and decision-making logic.

Bias Detection and Mitigation: Implementation of systematic bias detection mechanisms that test AI systems against India's diverse population demographics. This includes rigorous dataset scrutiny and regulatory sandboxes for stress-testing algorithms against entrenched inequalities.

Human Oversight Requirements: Ensuring that critical AI systems maintain meaningful human oversight with the ability to override algorithmic decisions. This is particularly important for welfare distribution, healthcare, and criminal justice applications.
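In code, meaningful human oversight often takes the form of a confidence-gated routing rule: the algorithm acts alone only on high-confidence cases, everything else goes to a reviewer, and a reviewer's decision always supersedes the algorithm's. The threshold and labels below are illustrative assumptions:

```python
def route_decision(score, threshold=0.9):
    """Route a case based on model confidence. The 0.9 cutoff is a
    hypothetical policy choice, not a recommended value."""
    return "auto_approve" if score >= threshold else "human_review"

def final_decision(score, human_override=None):
    """A human reviewer's decision, when present, always takes precedence
    over the algorithmic routing."""
    if human_override is not None:
        return human_override
    return "approved" if route_decision(score) == "auto_approve" else "pending_review"
```

The essential property is the last one: no welfare, healthcare, or justice decision becomes final on algorithmic output alone when a human has ruled otherwise.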

Legal Framework Development: Creation of comprehensive AI regulation that addresses the specific risks identified in Indian contexts while maintaining innovation potential. This should include sector-specific guidelines for urban governance applications.

Capacity Building: Investment in technical expertise and institutional capacity for AI governance, including training for public officials, judges, and civil society organisations on algorithmic accountability.

The path forward: building inclusive smart cities

India's experience with AI in urban governance offers valuable lessons for other developing countries grappling with similar challenges. The key insight is that technological innovation and social inclusion are not automatically compatible—they require deliberate policy choices, institutional innovations, and sustained commitment to equitable development.

The Smart Cities Mission's achievements in infrastructure deployment and service digitisation provide a foundation for more inclusive AI governance. However, realising the vision of "AI for All" requires moving beyond technocratic approaches toward participatory, transparent, and accountable AI systems that serve all citizens rather than privileging the digitally connected.

The window of opportunity for shaping inclusive AI governance remains open, but it is narrowing as systems become more entrenched and harder to modify. Cities, policymakers, and civil society must act now to ensure that India's urban AI future serves democratic values and social inclusion rather than reinforcing existing inequalities.

The evidence presented in this analysis reveals a troubling pattern: AI systems designed to enhance governance efficiency and service delivery often reproduce and amplify existing social inequalities. From welfare distribution systems that deny food to vulnerable families to surveillance infrastructure that disproportionately targets marginalised communities, India's AI-driven urban governance risks creating "digital apartheid" rather than inclusive development.

Future research and policy development should focus on developing context-specific AI governance frameworks that account for India's linguistic, cultural, and socioeconomic diversity while building institutional capacity for ethical AI oversight. The goal is not to slow technological progress but to ensure it contributes to building more equitable and sustainable urban futures for all Indians.

The promise of AI in urban governance remains significant, but realising this promise requires acknowledging and addressing the perils of algorithmic bias, digital exclusion, and technocratic governance. Only through inclusive, transparent, and accountable AI governance can India ensure that its smart cities truly serve all citizens in the digital age, transforming the current paradigm from "AI for some" to genuine "AI for All."

This transformation demands not just technical solutions but fundamental changes in how we conceptualise the relationship between technology, governance, and democratic participation. As India continues to lead global discussions on responsible AI, its urban governance experiments will shape not only domestic development outcomes but also international norms for AI deployment in diverse, developing societies. The choices made today will determine whether AI becomes a tool for liberation or oppression in the cities of tomorrow.
