Women in AI: Urvashi Aneja Investigates Social Impact of Artificial Intelligence on Indian Society

At TechCrunch, we’re dedicated to shining a light on the remarkable women who are driving the AI revolution. Our latest profile highlights the work of Urvashi Aneja, founding director of Digital Futures Lab and associate fellow at Chatham House’s Asia Pacific program.

A Conversation with Urvashi Aneja

We sat down with Urvashi to discuss her journey into AI, her current research focus on algorithmic decision-making systems in India, and the pressing issues facing the field as a whole.

How did you get your start in AI?

I began my career in research and policy engagement in the humanitarian sector. For several years, I studied the use of digital technologies in protracted crises in low-resource contexts. That experience taught me there is a fine line between using technology to improve lives and using it to exacerbate existing problems.

Can you tell us more about your current research focus?

My work currently focuses on the impacts of algorithmic decision-making systems on marginalized communities in India. These systems often perpetuate biases and disparities, rather than addressing them. I’m working with policymakers and stakeholders to develop more equitable and transparent approaches to AI development.

What do you think are the most pressing issues facing AI today?

There are several concerns that need attention. First and foremost, we need to address the lack of diversity and inclusion in AI development. The field is dominated by a small group of individuals from affluent backgrounds, which limits the perspectives and ideas brought to the table.

Another critical issue is the environmental impact of AI. As AI systems become more complex, they require increasing amounts of energy to operate. This not only contributes to greenhouse gas emissions but also perpetuates resource extraction and waste management problems.

Lastly, we need to prioritize accountability in AI development. Many companies are pushing the boundaries of what’s possible with AI without considering the long-term consequences or ensuring that their systems are transparent and explainable.

How can investors play a more responsible role in AI development?

Investors have a significant influence on the direction of AI research and development. To promote responsible innovation, they should consider the entire life cycle of AI production – from design to deployment. This includes evaluating the environmental impacts, labor practices, and business models used by companies.

Moreover, investors can push for more rigorous evidence about the benefits of AI and demand that companies demonstrate their commitment to transparency and accountability.

What’s the best way to build responsible AI?

To build AI that serves humanity rather than perpetuating harm, we need to re-center domain knowledge in the development process. This means working closely with experts from various fields – including sociology, philosophy, and environmental science – to ensure that our systems reflect a deep understanding of human needs and values.

We also need to adopt more transparent and explainable approaches to AI development. This can be achieved by using simpler logic-based models rather than complex black-box systems, which can perpetuate biases and errors.

How can we promote diversity and inclusion in AI?

Promoting diversity and inclusion requires a multifaceted approach. First, we need to recognize the value of diverse perspectives and experiences in AI development. This means actively seeking out individuals from underrepresented groups and creating opportunities for them to contribute to the field.

We also need to address the systemic barriers that prevent women and minorities from participating in AI research and development. This includes providing mentorship programs, funding support, and advocating for policy changes that promote greater diversity and inclusion.

What’s next for you?

I’m committed to continuing my work on algorithmic decision-making systems in India and exploring ways to apply my research to real-world problems. I’m also passionate about building a more inclusive community of AI researchers and developers who prioritize social responsibility and human values.

Urvashi Aneja’s Recommendations

  • Investors: Consider the entire life cycle of AI production, from design to deployment.
  • Developers: Re-center domain knowledge in AI development and adopt more transparent and explainable approaches.
  • Researchers: Prioritize diversity and inclusion in AI research and development.
  • Policymakers: Develop policies that promote transparency, accountability, and social responsibility in AI development.

About Urvashi Aneja

Urvashi Aneja is the founding director of Digital Futures Lab and an associate fellow at Chatham House’s Asia Pacific program. Her research focuses on algorithmic decision-making systems, AI governance, and digital rights.
