
Automated Decision Making: An EU Decision and its Implications for US Law

Thomas C. Carey | Partner, Business Chair

Thomas is a member of our Business Practice Group.

A recent EU court decision demonstrates a subtle difference between EU and US privacy laws, and may ultimately influence how US laws are interpreted.

The decision of the Court of Justice of the European Union arose from a court proceeding in Germany involving a credit rating agency. An individual who had been turned down for a loan on the basis of a poor credit score sued, arguing that the credit rating agency had engaged in automated decision-making that harmed her financially, in violation of the EU’s General Data Protection Regulation (GDPR).

Article 22 of the GDPR gives EU residents the right not to be subject to such decisions without their consent. Even if consent is given, Article 22 requires that the data controller “implement suitable measures to safeguard the data subject's rights … [including] at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision.”

In this case, the credit rating agency argued that it did not make credit decisions; it merely provided a credit rating that the bank could rely on or not in making its decision to grant a loan. The Court was having none of that. In the view of the Court, the bank’s decision was a foregone conclusion once a negative rating was issued.

Thus, the credit rating agency was subject to the requirement of GDPR Article 22, which gave the prospective borrower the right to make her case to a human at the agency.

What is the relevance to US law? Several US states have enacted general privacy laws modeled in part on the GDPR. Most of those laws give individuals the right to opt out of automated processing of personal data that produces legal or similarly significant effects on a consumer. Such provisions are currently in effect in Colorado, Connecticut, Delaware, Indiana, Montana, Oregon, Texas and Virginia. Other states have similar provisions in process.

The EU decision may give a court in one of those states a basis to reason that a negative credit rating has a sufficiently significant effect on a consumer that the state’s law concerning automated decision-making applies. The EU decision may also serve as precedent for the notion that credit rating agencies cannot evade the law by saying that they are not the actual decision-makers. Thus, the EU decision may end up influencing the rights of US consumers.

There is, however, an important difference between the GDPR and the US state privacy laws: only the GDPR gives the consumer the right, even after having consented to automated decision-making, to deal with a human and make the case for a better result. The US laws simply give the consumer the right to opt out of such decision-making. In the case of credit scores, this is likely to be an ineffective remedy because few lenders will be willing to make a loan without having the borrower’s credit score. The EU decision may highlight this gap in protection, possibly leading to reform of US privacy laws.

The principle may extend well beyond credit ratings and into other automated processes such as admissions to universities, the handling of unemployment benefits, and the provision of other social services.

In fact, the Biden Administration, in its October 30, 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (the “EO”), about which my partner Shane Hunter has written here, attempts to address that issue. The EO says that “Artificial Intelligence is making it easier to extract, re-identify, link, infer, and act on sensitive information about people’s identities, locations, habits, and desires. Artificial Intelligence’s capabilities in these areas can increase the risk that personal data could be exploited and exposed. To combat this risk, the Federal Government will ensure that the collection, use, and retention of data is lawful, is secure, and mitigates privacy and confidentiality risks.”

There may be a considerable gap between the federal government’s stated policy and its ability to enforce it. The United States lacks any comprehensive privacy law like the GDPR, and the states are midway through adopting a patchwork of privacy laws that often give individuals the right to opt out of automated decision-making but no right to deal directly with a human when such decision-making is deployed.
