Artificial Intelligence and Fairness

The prominence of big data analytics raises problems at many levels.

  1. Their advertised predictive power encourages the collection of the kind of data that tends to compromise privacy: high-frequency, non-aggregated transactional data.
  2. The datasets they are trained on often carry inherent biases arising from implicit factors, selection effects, and reporting practices.
  3. The algorithms themselves are often biased. They are also opaque, which makes it difficult to establish their problematic characteristics.
  4. There is an inherent three-way trade-off between individual fairness, group fairness, and the calibration of the final model (see the sketch after this list).
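
As a minimal, hypothetical illustration of point 4, the sketch below builds a toy score that is calibrated within each of two synthetic groups and then measures the demographic parity gap that a single decision threshold produces. The groups, base rates, and threshold are illustrative assumptions, not data or methods from any specific study.

```python
# Minimal sketch (synthetic data, illustrative assumptions only) of the tension
# between within-group calibration and demographic parity when base rates differ.
import numpy as np

rng = np.random.default_rng(0)

def make_group(n, base_rate):
    """Synthetic group: binary labels drawn at the group's base rate, plus a
    score that tracks that base rate, so it is calibrated by construction."""
    y = rng.binomial(1, base_rate, size=n)
    score = np.clip(base_rate + rng.normal(0, 0.05, size=n), 0, 1)
    return y, score

# Two groups with different base rates (an illustrative assumption).
y_a, s_a = make_group(5000, base_rate=0.6)
y_b, s_b = make_group(5000, base_rate=0.3)

threshold = 0.5
pred_a = (s_a >= threshold).astype(int)
pred_b = (s_b >= threshold).astype(int)

# Demographic parity: gap in positive-prediction rates across groups.
dp_gap = abs(pred_a.mean() - pred_b.mean())

# Calibration within groups: mean score vs. observed positive rate per group.
calib_a = abs(s_a.mean() - y_a.mean())
calib_b = abs(s_b.mean() - y_b.mean())

print(f"Positive-prediction rate, group A: {pred_a.mean():.2f}")
print(f"Positive-prediction rate, group B: {pred_b.mean():.2f}")
print(f"Demographic parity gap:            {dp_gap:.2f}")
print(f"Calibration error, group A:        {calib_a:.3f}")
print(f"Calibration error, group B:        {calib_b:.3f}")
```

Because the score tracks each group's base rate, it is well calibrated within both groups, yet any single threshold yields very different positive-prediction rates across them; enforcing parity would require distorting the calibrated scores, which is one concrete face of the trade-off described above.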

Commentators in the West have caught on to this.

  1. Big-data analytics, by the very fact that they are designed by the privileged and often for profit, increase inequality and threaten democracy.
  2. All aspects of the human experience are turned into data and sold to businesses. This new paradigm helps them influence behavior and creates a new kind of inequality.
  3. As algorithms select, link, and analyse ever larger sets of data, they seek to transform previously private, unquantified moments of everyday life into sources of profit.

These concerns become considerably more complicated, and more urgent, when applied to India’s unique context.

🚫   India has unique social structures of exclusion and inequality.

🏛️   The State in India has deliberately pursued big data analytics through digital public goods.

🌎  India has complex relationships with technology companies from other countries.

In such a context, interdisciplinary research in conjunction with 👣 Sociology/Anthropology and 🏛️ Political Science is required to:

  1. Develop a framework to specify the minimum inherent risks of discrimination and unfair processing, based on an ideal functionality of AI applications.
  2. Develop tools and techniques for reliability analysis of AI and ML applications, including the design of post-deployment test tools for measuring both reliability and utility.
  3. Develop standards for post-deployment measurement and monitoring of AI applications for safety from bias and discrimination (a minimal monitoring sketch follows this list).
  4. Develop tools and techniques for specifying precise error models for the associated data.
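
As one hedged illustration of what post-deployment monitoring (item 3 above) might look like in practice, the sketch below computes a simple disparity metric over a batch of logged decisions and flags the batch when the gap exceeds a chosen tolerance. The data layout, the group attribute, the metric, and the tolerance are all assumptions made for illustration; they are not a proposed standard.

```python
# Hedged sketch of a post-deployment bias monitor (item 3 above).
# Assumptions: decisions are logged with a group attribute and a binary
# outcome; the disparity metric and tolerance are illustrative choices.
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class LoggedDecision:
    group: str      # a protected-attribute category (assumed to be logged)
    decision: int   # 1 = favourable outcome, 0 = unfavourable

def positive_rate_gap(batch):
    """Largest gap in favourable-decision rates between any two groups."""
    totals, positives = defaultdict(int), defaultdict(int)
    for record in batch:
        totals[record.group] += 1
        positives[record.group] += record.decision
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

def monitor(batch, tolerance=0.1):
    """Flag a batch of logged decisions whose group disparity exceeds tolerance."""
    gap, rates = positive_rate_gap(batch)
    status = "ALERT" if gap > tolerance else "ok"
    return status, gap, rates

# Illustrative usage with a small synthetic batch.
batch = (
    [LoggedDecision("group_a", 1)] * 70 + [LoggedDecision("group_a", 0)] * 30 +
    [LoggedDecision("group_b", 1)] * 45 + [LoggedDecision("group_b", 0)] * 55
)
status, gap, rates = monitor(batch, tolerance=0.1)
print(status, f"gap={gap:.2f}", rates)
```

A real monitoring standard would also need to define the protected attributes, the window over which batches are formed, additional metrics (for example calibration or error-rate gaps), and the escalation process once an alert fires; the sketch only shows the basic shape of such a check.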