Measuring Gender and Religious Bias in the Indian Judiciary

Elliott Ash, Sam Asher, Aditi Bhowmick, Daniel Chen, Tanaya Devi, Christoph Goessmann, Paul Novosad, Bilal Siddiqi

January 2021

Download the Bias Paper | Access the Judicial Data

We thank Alison Campion, Rebecca Cai, Nikhitha Cheeti, Kritarth Jha, Romina Jafarian, Ornelie Manzambi, Chetana Sabnis, and Jonathan Tan for helpful research assistance. A special thanks to Sandeep Bhupatiraju for contributions in preparation of the data. We thank the World Bank Program on Data and Evidence for Justice Reform, the World Bank Research Support Budget, the Emergent Ventures fund at the Mercatus Center, and the UC Berkeley Center for Effective Global Action for financial support.


Muslims and women are underrepresented in the Indian judiciary. India is home to 195 million Muslims, and women represent 48% of the population, yet:

[Figure: representation of women and Muslims among Indian judges]

Does the gender and religious imbalance of the courts affect judicial outcomes? Using data on over six million cases filed under India's criminal codes between 2010 and 2018 — close to the universe of criminal cases in the country — we examined whether defendants receive better judicial outcomes when their cases are heard by judges who share their gender or religious identity (male/female or Muslim/non-Muslim). We compare defendants charged under the same criminal act and section, in the same month, in the same district court; the only difference is that some are assigned to judges who match their gender or religious identity and some are not.
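In regression form, this comparison can be sketched as follows (illustrative notation on our part, not the paper's exact specification):

$$ \text{acquitted}_i = \beta \cdot \text{match}_i + \alpha_{c(i) \times s(i) \times t(i)} + \varepsilon_i $$

where acquitted_i indicates whether defendant i was acquitted, match_i indicates whether the judge shares defendant i's gender or religious identity, and α is a fixed effect for each district court × act-section × month cell, so that β is identified only by comparing defendants facing the same charge in the same court at the same time. In-group bias would show up as β > 0.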

We find no evidence of bias! Defendant outcomes are unaffected by whether their identity matches the judge's. Table 1 shows the relationship between judicial decisions and the religious identities of the judge and the defendant. The third row shows the in-group bias in percentage points: a value of 0.002 means that matching the judge's identity raises the acquittal rate by 0.2 percentage points, i.e., essentially not at all, and not statistically significantly.

Table 1. Magnitude of in-group bias by religion is a precisely estimated zero


Another way to get at the same question is to examine what happens to judicial outcomes when the identity composition of judges in a court changes. For example, does it matter whether a Muslim defendant's case is heard in a court composed of one Muslim and one non-Muslim judge vs. a court with two non-Muslim judges?

[Figure: example of a change in the religious composition of judges in a court]

If there is in-group bias, we would expect to see Muslims having worse outcomes and non-Muslims having better outcomes after a court transition like this. The two graphs below show how acquittal rates changed for Muslims and non-Muslims after court composition changes that decreased the share of Muslim judges in a courtroom:

Figure 1. No change in acquittal rates for Muslim and non-Muslim defendants as judge composition becomes more non-Muslim


Again, we find no evidence of in-group bias, on either the gender or the religion dimension.
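For readers who want to see the shape of this comparison in code, here is a minimal difference-in-differences sketch, assuming a hypothetical case-level file cases.csv with illustrative column names (acquitted, post, muslim_def, court_id); it is not the paper's exact specification:

# Acquittal rates for Muslim vs. non-Muslim defendants, before and after
# a transition that lowers the share of Muslim judges in the court.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cases.csv")  # hypothetical input file

# post = 1 for cases decided after the composition change;
# muslim_def = 1 for Muslim defendants.
model = smf.ols("acquitted ~ post * muslim_def", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["court_id"]}
)

# In-group bias predicts a negative post:muslim_def interaction --
# Muslim defendants faring worse once the bench becomes less Muslim.
print(model.summary())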

Why is this surprising? Because a large number of studies have found substantial in-group bias among judges, using research designs very similar to ours. Focusing just on studies that take advantage of effectively random assignment of defendants to judges:

  • Shayo and Zussman (2011) find that Arab and Jewish defendants in Israeli small claims courts obtain better outcomes when their cases are heard by Arab and Jewish judges, respectively.
  • Anwar, Bayer, and Hjalmarsson (2012) find that all-white juries in the U.S. convict Black defendants 16 percentage points more often than white defendants, a gap that is entirely eliminated when there is just one Black member on the jury.
  • Knepper (2018) finds that women are more likely to win workplace sex discrimination lawsuits when their cases are heard by female judges.

One reason that all these studies have found statistically significant effects could be publication bias. If positive and null effects had equal chances of being published, we would expect no correlation between effect sizes and standard errors. Figure 2 shows the standardized effect sizes (in absolute value) and standard errors of every judicial in-group bias study we could find that uses either random judge assignment or rotating judge cohorts to identify bias. There is in fact a strong positive correlation between effect sizes and standard errors (slope = 2.25, p-value = 0.001). The dashed line shows the conventional 95% statistical significance threshold; the pattern suggests that studies are more likely to be published or written up as bias studies if they find estimates above this line. Note the exceptions to this rule: Lim, Silveira, and Snyder (2016) find no evidence of racial or gender bias among Texas judges, and Depew, Eren, and Mocan (2017) find a negative in-group racial bias effect in juvenile courts in a US state.
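The check itself is easy to sketch: regress the absolute standardized effect sizes on their standard errors across studies; under no selection on significance, the slope should be near zero. The numbers below are made-up placeholders, not the estimates behind Figure 2:

import numpy as np
import statsmodels.api as sm

abs_effects = np.array([0.05, 0.16, 0.08, 0.02, 0.11])  # hypothetical |effect sizes|
std_errors  = np.array([0.02, 0.07, 0.03, 0.01, 0.05])  # hypothetical standard errors

fit = sm.OLS(abs_effects, sm.add_constant(std_errors)).fit()
print(fit.params, fit.pvalues)  # a large positive slope suggests selection

# The conventional 95% threshold is the line |effect| = 1.96 * SE;
# studies above this line are the ones reporting significant bias.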

Figure 2. Relationship between effect size and standard error of in-group bias studies


For more information, please see our paper, Measuring Gender and Religious Bias in the Indian Judiciary, or read our op-ed in the Hindustan Times.

To access the data open-sourced from this project, please click here.

To access the paper replication code and data, please visit the GitHub repository.

For a description of these data with a small set of examples of potential use cases, see our Medium post describing the data release.

If you use these data, please cite our paper:

@unpublished{aabcdgns2021measuring,
  author = {Ash, Elliott and Asher, Sam and Bhowmick, Aditi and Chen, Daniel and Devi, Tanaya and Goessmann, Christoph and Novosad, Paul and Siddiqi, Bilal},
  note   = {Working Paper},
  title  = {{Measuring Gender and Religious Bias in the Indian Judiciary}},
  year   = {2021}
}
            


Contact


Development Data Lab works with governments and private firms to generate bespoke insights from our data platform or your own data. For more information, send us an email.