What Is the Judiciary

The judiciary is the branch of government in which judicial power is vested. This means that the judiciary has the power to try cases involving the government and to administer justice within its jurisdiction. It interprets and ensures the execution of the law, decides punishments for offenders, and upholds justice. It is therefore one of the most important structures for bolstering justice in societies around the world.

Because of its importance, it is essential for us to examine whether the judicial system is truly fair. Traditionally, people have believed in legal formalism, a theory that holds that legal rules stand separate from other social and political institutions. According to this theory, once lawmakers produce rules, judges apply them to the facts of a case without regard to social interests or public policy. This idea was prevalent because, after all, judges are rigorously trained and expected to be competent at their jobs. However, as more and more evidence suggests, this may not be the case.

A newer theory, legal realism, suggests that all law derives from prevailing social interests and public policy. According to this theory, judges consider not only abstract rules but also social interests and public policy when deciding a case. For example, justices of the Supreme Court are often accused of “judicial activism”: basing their decisions not on the written constitution itself, but on “interpretations” of the constitution shaped by their political positions.

If judges’ decisions can be influenced by social, cultural, and economic factors, is it safe to assume that they can also be influenced by a myriad of other factors, some so subtle that they go unnoticed because they cannot be dealt with consciously? Researchers believe so: judges’ decisions are influenced by psychological heuristics and biases. Various papers show clear judicial biases in laboratory environments, such as the influence of anchoring, framing, hindsight bias, the representativeness heuristic, egocentric bias, snap judgments, and inattention.

In this paper, we will address many of the psychological biases that affect judicial decisions. First, we will explore psychological heuristics in judicial decision making. Next, we will discuss the profound role of mental stress in sentencing. After that, we will introduce the concept of implicit discrimination. Last but not least, we will describe other legally irrelevant factors that influence cases. After describing these phenomena and their adverse effects, we will propose policy changes to the judicial system that limit or eliminate these unjust errors.

We shall begin with psychological heuristics. In psychology, heuristics are rules of thumb that people often use to form judgments and make decisions. They are mental shortcuts that usually involve focusing on one aspect of a complex problem and ignoring others. In some cases, heuristics cause people to draw systematically inaccurate inferences, which may, in turn, generate cognitive illusions of judgment. Because judges’ ability to judge is presumably among the best of us, and because judges go through years of rigorous training, people would assume that judges are immune to these heuristics and biases. Research has shown, however, that no professional is immune, since these judgments occur at a subconscious rather than a conscious level. Another common misassumption is that decisions are made by the parts of the brain that deal with numbers, reasoning, and calculation, but in fact decision making is influenced by sensations, emotions, and the subconscious as well. And that is where heuristics come in.

Let’s begin with the effects of numbers. Anchoring is a cognitive bias in which a subject’s decision or estimate is influenced by an initial piece of evidence or information. The information offered is called the “anchor”. For example, when we ask subject group A to estimate the price of a product after giving them the price of a similar or related product (the number here is the “anchor”, regardless of whether the information we provided was true), their estimate is very likely to stay close to the number we provided.

A control group B, given no anchor, would wander much further from that number. Judges can be influenced in a similar way. In one study, judges were presented by the prosecutor with one of two sentencing demands for a rape case (12 months or 34 months). Regardless of other factors, the judges who were presented with the 34-month demand punished the defendant more harshly, while the judges presented with the shorter demand acted more leniently. This may be because judges tend to give weight to information that is consistent with their prior knowledge, in this case the sentence length suggested by the prosecutor.

Later, experimenters found that this anchoring effect does not have to come from the prosecutor. Even journalists, who are considered irrelevant to the judicial process, can anchor and influence judicial decisions. In another study, experimenters presented a hypothetical personal injury lawsuit involving a struck car and an injured plaintiff. Half of the judges were also given a motion filed by the defendant to dismiss the case for failing to meet the jurisdictional minimum in a diversity suit ($75,000, the anchor); the other half were not. When asked “how much would you award the plaintiff in compensatory damages”, the judges who had not seen the motion awarded an average of $1,249,000, while the judges exposed to the anchor awarded an average of $882,000, a statistically significant difference. These experiments show that when it comes to numbers, judges do not “calculate” them based on written law, but instead rely heavily on the impressions of numbers that they receive.

Then there is hindsight bias. Hindsight bias is the tendency for people to perceive events that have already occurred as having been more predictable than they actually were before the event took place. For example, “I told you so” is a common phrase that is magically forgotten when our predictions go the wrong way. Judges often evaluate outcomes in hindsight. In one case, a physician was accused of malpractice because he failed to detect a tiny tumor on an early chest radiograph. The tumor grew, and the patient died as a result.

The physician was found guilty after another radiologist, who had reviewed the radiographs taken once the tumor was found, testified that the tumor could have been detected on the early radiograph, without considering that the tumor would have been far harder to detect beforehand. Another study shows that judges who were informed that a psychiatric patient became violent were much more likely to find the patient’s therapist negligent than those who did not receive information about the outcome and its severity.

In one experiment, judges were separated into three groups; each group was told a different outcome to a hypothetical case, and the judges were then asked to select the outcome they would have considered most likely had they been deciding the case themselves. Overall, the sum of the percentages of judges in each of the three conditions who selected the outcome they had been given as the “most likely to have occurred” was 172 percent, whereas it would have been about 100 percent if learning the outcome had had no effect (since all three groups were choosing among the same three outcomes). This evidence shows that judges are ordinary people who overestimate, after the fact, their ability to have predicted the result of events.

Overestimating our own abilities is a prevalent human practice, which leads us to egocentric bias. Egocentric bias is the tendency of people to believe that they are better than the average person. When betting, everyone believes that their chances are higher than 50%. One study shows that 88% of judges report that they are less likely than the median judge to be overturned on appeal. Judges’ self-regard shows up in virtually every measurable respect. For example, a study found that 97% of US judges believe they are above average in their ability to “avoid racial prejudice in decision making”. However, this is clearly not true. Discrimination, especially implicit discrimination, which we will cover later, is still a huge issue in courts.

A heuristic closely related to hindsight bias is confirmation bias. Confirmation bias is the tendency of people who hold preconceptions or hypotheses about a given issue to favor information that corresponds with their prior beliefs and to disregard evidence pointing to the contrary. For example, members of a political party are more likely to rate news outlets that share their ideological view as more “convincing”. We have seen liberals attack Fox News and conservatives call CNN “fake news” for long enough. In a study conducted by Eric Rassin, participants were asked to review 20 pieces of information and to rate the degree to which each inculpated or exculpated the prime suspect.

The experimental group, however, was also presented later in the case file with an alternative suspect who may have been involved. In the end, all participants rated the pieces of evidence similarly, showing that judges often fail to consider alternative scenarios because they have already made up their minds about the “criminal”, who is at that point still only a suspect. In another study, researchers discovered that after judges decided to press charges, they were less likely to ask for additional investigation, and instead suggested more guilt-confirming investigation. The same group found that pretrial detention influences judges’ perception of the strength of the evidence, so judges were more likely to convict in cases where the suspect was already detained. Confirmation bias speeds up the judicial process, but sometimes at the cost of innocent people.

Availability is a psychological heuristic in which individuals rely on the examples that come to mind most readily when recalling information. For example, when subjects are asked whether car accidents or plane accidents are more frequent, they are more likely to think of planes, because catastrophic plane accidents are more frightening and more memorable. In 1989, researchers found that jurors believed witnesses were more deceptive if they told the truth first and then told lies, and believed witnesses were less deceptive if they first told lies and then told the truth. This is the availability heuristic at work: when a witness lies after telling the truth, the image “this witness tells lies” is the more recent one in jurors’ minds, so it is the one most likely to be recalled when they consider the case.

After discussing empirical data on all these heuristics and biases, what is the real reason behind them? Why do judges psychologically employ techniques that fault their decisions? Studies have found that this is primarily because judges are expected to master a vast variety of subjects and carry workloads that are simply too large. Cognitive scientist Herbert A. Simon proposed that human judgments are limited by available information, time constraints, and cognitive limitations, calling this bounded rationality. The evidence suggests that heuristics are how people cope with these limits: judges do not have the mental capacity to reason carefully through every case they receive.

For example, according to one study, the likelihood of a favorable ruling is greatest at the very beginning of the workday (when the judge is not mentally fatigued), drops to near zero after long hours of work, and returns to about 65% after a break with food (which raises the judge’s glucose levels and relieves mental stress). This is because as judges work they become more and more fatigued, and their minds have to resort to shortcuts. Interestingly, the study found that judges and lawyers are not conscious of this effect, even though the gap in the likelihood of a favorable ruling is so large. Thus high glucose levels, rest, and positive emotional states can lead judges to rule more favorably, and this suggests one way to lessen the impact of heuristics.

Apart from psychological heuristics, there are many other ways a judge’s decision can be influenced by unwanted, internal factors. One of these is implicit discrimination. Unlike “taste-based” and “statistical” discrimination, which are conscious, implicit discrimination is defined as discrimination that may be unintentional and outside of the discriminator’s awareness. Implicit discrimination is normally targeted at demographic characteristics such as race and gender, and in non-legal settings it can be found everywhere.

Ian Ayres found that African-American cab drivers receive lower tips than white cab drivers, in part because the tipping decision is made quickly. These quick first impressions can mean a lot. Joshua Correll (2002) used a video game to show that subjects were quicker to decide not to shoot an unarmed white target than an unarmed black target, even though both targets were armed at equal rates in the context of the game. This is an application of the “mental stress” idea we introduced in the previous section: humans simply cannot spend conscious mental effort on every single detail of life. Implicit discrimination may also appear in judicial settings.

A study assigned law students, economics students, practicing lawyers, and judges to hypothetical cases in which the only aspect that varied was the race of the defendant. The researchers found that evaluators were harsher towards defendants of their own race during the guilt-innocence decision, but more lenient towards defendants of their own race in the sentencing phase. The exact reasons for this phenomenon are unknown. What is important is that, regarding overall racial bias, the study found that minorities are more likely to be convicted. Evidence also suggests that the source of the bias is deep-rooted: even after merging a small sample of judges and prosecutors into the analysis, the results were very similar, and the subjects were not aware of this implicit discrimination. This experiment demonstrates implicit discrimination based on race.

There may also be implicit discrimination when it comes to gender. Nine out of ten female law professionals responding to a State Bar survey reported being the target of at least one incident of gender discrimination in the courtroom during the preceding three years. The study also found that women litigants often experience hostile, demeaning, or condescending treatment from attorneys and sometimes from judges, and that judges rarely reprimand counsel or court personnel whose behavior or comments exhibit gender bias. Judges often look less alert and attentive, stop taking notes, or wear a bored expression when a woman speaks, suggesting that they feel women’s presentations are not as important as those of men.

Some researchers state that women are treated “chivalrously” by the legal system, pointing to evidence that women are held in custody less often than men and receive less harsh sentences. Other researchers assert that women are harmed because of their feminine characteristics. These researchers believe that the sentences women receive are related to gender roles: if the offense fits women’s gender roles (such as in property crime cases), they are treated leniently, and in cases where women play the “role” of a male, such as homicide, they are treated far more harshly. In a Miller study published in the journal Social Psychological and Personality Science, the more the judges in a custody case bought into traditional gender roles, the more likely they were to award greater custody time to mothers over fathers, by an average of almost a month every year.

The same study found that in employment discrimination cases, men’s cases had a higher chance of being dismissed by the judge without even going to trial, while women’s outcomes were more affected by where the judge sat on the liberal-conservative axis. These results show that our implicit attitudes towards women and the modern feminist movement do impact our judgement. And judges are not the only ones affected: discriminatory treatment of attorneys affects their credibility and might have dire consequences for their ability to advocate effectively for clients.

The reason behind implicit discrimination may be simpler than we think: the short “first-impression” window individuals get. A study found that anticipation produced more positive implicit attitudes, as shown in participants who were told beforehand that they would be working with black individuals. This is consistent with the result of the tipping experiment. When participants have only a short amount of time to decide, they rely on first impressions, and these impressions often derive from race and gender, the first distinguishable characteristics upon an encounter.

Last but not least, another blow to legal formalism is the discovery of further legally irrelevant factors that influence cases. These factors are numerous, and they vary widely.

For example, according to a Berdejo and Chen study in 2016, U.S. federal appeals court judges become more politicized before elections and more unified during wars. This is contrary to the belief that courts are independent of the political process and should therefore be nonpartisan. Another study found that refugee asylum judges are 2 percentage points more likely to deny asylum to refugees if their previous decision granted asylum. There is no valid reason a fair legal system should allow independent cases to influence each other like this.

Overall, irrelevant factors that influence cases are numerous and occur in every possible way. A Chen and Eagle study in 2016 found that legally irrelevant factors such as the time of day, the weather, the number of recent grants, what is in the news, and the date of decision all influence judicial decisions. Even judicial rules that are supposed to be governed by law are ignored. Although the United States judicial system asks jurors to ignore evidence obtained illegally, a study conducted by Doob and Kirshenbaum shows that jurors are more likely to rate a defendant as guilty when they have been exposed to prior criminal-record information that was supposed to be used only to assess credibility, not as an indicator of guilt.

We have pointed out these phenomena that create bias and unfairness in the judicial decision-making process. Whether it is psychological heuristics and biases, implicit discrimination, or irrelevant factors, the problem is extremely varied. But the question remains: how can we solve it? How can we decrease these negative impacts and build a system that works for everyone? We have a few proposals.

The first one, and the first step in addressing psychological heuristics, biases, and discrimination, is to acknowledge that these effects exist. According to one survey, although a majority of male attorneys and judges believed that bias against women does exist, most believed that it exists in only a few areas and involves only a few individuals. In reality, it is widespread. This is because gender bias, a form of implicit bias, is very likely to be unintentional. For any institution to do anything about this matter, we first have to acknowledge that human beings are imperfect and bound to have problems. Only then can policy changes be expected to work. For tackling gender discrimination, this recognition may include steps such as addressing everyone in court with appropriate titles (research has found that female lawyers are more likely to be called by their first names, while male lawyers are addressed as “mister”).

The second one is fairly simple. We have already found that judges’ verdicts are heavily correlated with their rest and current mood. Then why don’t we allow judges to get the rest and refreshment they need? The workload carried by senior judges is almost twice what it was two decades ago. In 1996, senior judges “terminated” 14 percent of all civil and criminal matters; by 2014, that share had jumped to 24 percent.

Civil and criminal filings in the federal district courts are substantially higher than they were 20 years ago, rising 28 percent since 1993. But the number of federal judges provided by Congress to handle these filings has barely changed, growing by only 4 percent over the same two-decade period. Most importantly, we advocate for Congress to increase the number of judges available, which addresses the workload issue at its root. If that is not politically pragmatic, we should resort to simpler measures. Experiments have shown that a break as simple as a quick snack refreshes a judge’s mind and bolsters judgment accuracy. Courts should give judges more of these opportunities to rest so that their work is more accurate and efficient.

Our third proposal is to launch an education campaign for judges to learn about the psychological heuristics and biases they themselves may encounter. Previous education efforts on economics have proven successful.

Experimenters Ash, Chen, and Naidu in 2017 tested the effects of economics classes for judges. After training in these classes, judges’ votes shifted by 10%. In the district courts, when judges were given discretion in sentencing, economics-trained judges immediately rendered 20% longer sentences relative to their non-economics counterparts, and these sentences were far more accurate. Proponents claim that a similar educational campaign on psychology would render similarly favorable results. This may also help with the implicit discrimination problem. As we mentioned before, anticipation produced more positive implicit attitudes towards ethnic minorities. If these attitudes can be “taught” beforehand, much implicit discrimination can be averted.

The last method is rather controversial, but it provides a glimpse of the future of the judicial system. This method would overhaul the current judicial system forever; however, taking this bold approach may eliminate psychological biases altogether. Consider this: currently, judges are humans who are prone to psychological biases. But what if judges were machines that could hand down sentences precisely? Technologies such as artificial intelligence and statistical big data have grown rapidly over the past few years. The question, then, is whether these intelligent machines can be brought into the judiciary.

The answer may be yes. Machines can predict judicial outcomes with machine learning, and are already doing so. According to the SAS Institute, machine learning is a method of data analysis that automates analytical model building. After training a support vector machine (SVM), researchers were able to predict European Court of Human Rights decisions with 79% accuracy. If machines can accurately predict human judgment outcomes, the next step is to judge accurately on their own. In the United States, algorithms are already helping to recommend criminal sentences in some states. In Europe, the Estonian Ministry of Justice has asked Velsberg and his team, leading developers in this area, to design a “robot judge” that could adjudicate small claims disputes of less than 7,000 euros.
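To make this concrete, here is a minimal sketch of what such a judgment-prediction model might look like, using a TF-IDF bag-of-words representation and a linear SVM in scikit-learn. This is an illustration, not the actual pipeline from the ECHR study: the file name echr_cases.csv, the column names, and the parameter choices are all assumptions made for the example.

```python
# Illustrative sketch only: a text classifier in the spirit of the
# ECHR-prediction work, not the published pipeline itself.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

# Hypothetical dataset: one row per case, with the case facts as text
# and a binary label for whether a violation was found.
cases = pd.read_csv("echr_cases.csv")
X_train, X_test, y_train, y_test = train_test_split(
    cases["text"], cases["violation"], test_size=0.2, random_state=0
)

# Word and bigram TF-IDF features fed into a linear support vector machine.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=5),
    LinearSVC(C=1.0),
)
model.fit(X_train, y_train)

print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

The key design point is that the model learns only from the textual record of past cases; whether such a classifier reaches the reported 79% accuracy depends entirely on the data it is trained on.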

In concept, the two parties would upload documents and other relevant information, and the AI would issue a decision that can be appealed to a human judge. A disclaimer: we are not proposing to eliminate humans from the judicial process entirely, which would be far from humanistic. We are proposing to use machine learning technology to assist judges, so that their decisions are more accurate and more beneficial to society as a whole. This may also decrease the huge workload judges face: by shifting some of a judge’s lesser responsibilities to efficient machines, it leaves them more mental capacity for the more important factors of a case. In essence, our goal, in the words of Engstrom and his team of law school and computer science students at Stanford, is that “the promise of the AI approach is you get more consistency than we currently have.” This is a good way to eliminate problems caused by the unconscious human brain and mind.

One strong controversy regarding machine learning is the use of governmental databases. For machine learning to work, a comprehensive national database has to be created so that machines can be trained on the data. One advantage of governmental databases is that they can connect with each other, making data sharing easier. In addition, some scholars suggest that judges themselves could access these databases as references for their current cases, which in turn would counteract some of their psychological biases. For example, collecting good base-rate information and using it effectively can reduce the representativeness heuristic, because the statistics are right there.
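As a sketch of the base-rate idea, the snippet below shows how a judge or clerk could query a table of past sentences to see the typical range for a given offense before any anchor is suggested. The file past_sentences.csv, its columns, and the offense label are invented for the example.

```python
# Minimal sketch of a base-rate lookup over a hypothetical sentencing database.
import pandas as pd

history = pd.read_csv("past_sentences.csv")  # hypothetical extract of past cases

# Summary statistics of sentences previously handed down for each offense,
# which a judge could consult instead of relying on a suggested anchor.
reference = (
    history.groupby("offense")["sentence_months"]
           .agg(["count", "median", "mean"])
           .rename(columns={"count": "n_cases"})
)

print(reference.loc["burglary"])  # typical sentence range for one offense type
```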

These statistics can also combat anchoring, because judges have direct access to the actual numbers in similar cases. However, opponents claim that such databases would create big government, violate civil liberties, and may themselves be inaccurate. For example, modern risk assessment tools are often driven by algorithms trained on historical data, and many of the correlations in these data are just that, correlations without causation: an algorithm may find that low income is correlated with high recidivism, but that leaves you none the wiser about whether low income actually causes crime.
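To see why a learned correlation can mislead, the toy simulation below (entirely made-up numbers, with a hypothetical confounder) generates data in which income has no causal effect on recidivism at all, yet the two are still clearly correlated, which is exactly the pattern a naive risk-assessment algorithm would pick up.

```python
# Toy simulation: a hidden confounder produces a correlation between low
# income and recidivism even though income has no causal effect here.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

disadvantage = rng.normal(size=n)                   # hidden confounder
income = -0.8 * disadvantage + rng.normal(size=n)   # low income tracks disadvantage
# Recidivism depends only on the confounder, not on income...
recidivism = (0.8 * disadvantage + rng.normal(size=n)) > 1.0

# ...yet income and recidivism remain negatively correlated in the data.
print(np.corrcoef(income, recidivism.astype(float))[0, 1])
```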

The opposition is fierce. Last July, more than 100 civil rights and community-based organizations, including the ACLU and the NAACP, signed a statement urging against the use of risk assessment tools. At the same time, more and more jurisdictions and states, including California, have turned to them in a hail-Mary effort to fix their overburdened jails and prisons. Some critics also point out that if prior judicial decisions are biased, then a database built from them will be biased as well. One counterargument is that a comprehensive database aggregates over many decisions, averaging out individual anomalies. In summary, we still have a long way to go before machine learning and AI can play a larger role in the judiciary, but the future seems bright.

Even if robotic judges are not adopted, machine learning and other modern technologies can be used in other ways. Using AI, training programs could be targeted toward judges to help them learn how to use the hearing process to better advantage, founded on the belief that simply alerting judges to the fact that their behavior is highly predictable, in ways that may indicate unfairness, may be sufficient to change that behavior.