AI on Trial

Level: C1/C2

Type of English: general

Lesson activities: listening comprehension, vocabulary building, speaking practice. 

Case Study 1: Legal rights for AI. Case Study 2: AI legal assistants

Tags: AI, automation, legal system, AI on Trial, disruptive

The complex legal system is grappling with the emerging role of Artificial Intelligence (AI). Through engaging discussions and case studies, we will explore the profound ethical and legal implications surrounding the accountability of AI, inviting you to ponder the future of justice in an increasingly automated world.

Can we hold AI responsible for its actions?

Should we grant AI legal rights?

How do we strike a balance between innovation and accountability?

Get ready to question the very foundations of our legal system!

During the 1-hour class your teacher will take note of your mistakes and make corrections.

These will be emailed to you after the lesson.

It is a good idea to revise the same vocabulary at the start of the next lesson.

Opening Questions:

Are you using or testing AI in your personal life or with work?

In general, are you optimistic or pessimistic about the future?

Are you worried that AI could take your job? Do you think any job is safe?

What legal dangers could exist with the development of AI?

Should governments be responsible for protecting us from dangers associated with new technology? Why / why not?

What do you think a tech hearing is? Do you have anything similar in France?

Have you ever done a fact-finding mission?

What is Pandora’s box?

Can you put Pandora back into the box after opening it?

Do you agree that blockchain technology was a game changer?

Would an AI legal assistant be useful to you?

New Vocabulary Questions:

Can you identify something that you are accustomed to?

What is another way to say accustomed to?

What is government oversight? What is it for?

What is CEO an abbreviation for?

If we are on the same page, what does it mean?

What are the stakes in a game of poker?

If the stakes are high, what does it mean?

Can you describe a nightmare scenario?

Do you agree that pineapple goes hand-in-hand with pizza?

Can you use the adjective accusatory to describe someone?

Do you like the sound of concrete legislation?

What do guardrails do and where can you find legal guardrails and physical guardrails?

What is disruptive tech and can you think of some examples?

Do you have a lot of tasks to do today?

If I put tasks into your hands, what did I do?

Watch the Video:

OpenAI CEO Sam Altman calls for A.I. oversight in testimony to Congress

Task: As you are listening, write down keywords for the following:

  1. the worst possible things that could happen as a result of AI
  2. referring to a previous tech hearing which was described as reactive
  3. potential action taken against AI that has yet to be decided upon

Listening Comprehension Questions: T/F

Answer the questions true (T), false (F) or not given (NG)

  1. The AI hearing seemed less confrontational than previous hearings.

  2. The hearing only focused on the negative effects of AI.

  3. Amongst other things, generative AI was compared to the creation of antibiotics.

  4. OpenAI’s CEO said that companies will have to act responsibly with regard to AI.

  5. During the hearing, concrete legislation was suggested.

  6. Blockchain is the technology that is used to create many cryptocurrencies.

  7. Generative AI has already been used to change the way we watch Netflix or search for things online.

  8. AI is replacing blockchain, as it is more successful in business.

Listening Comprehension Questions:

What other technologies did they compare generative AI to? 

Why did the presenter compare blockchain to AI?

What are some of the ways that generative AI is already changing the way we behave?


Can an AI system be held legally accountable for its actions, and if so, how can we define its responsibility in the absence of human consciousness?

Should AI be granted legal rights, similar to personhood, or should its creators be held liable for any harm caused by their AI creations?

As we embrace AI’s potential to streamline processes and improve efficiency, how do we ensure that the legal system maintains its core principles of fairness, transparency, and justice?

If machines were able to think for themselves, it could open a Pandora’s box of problems.

New Vocabulary Practice:

Using new vocabulary in a different context helps you to memorize it.

Which accents are you accustomed to, and which are you not accustomed to?

Do we need oversight for new technologies or should we let the free market control it?

With regard to AI regulation, do you think you are on the same page as the rest of the group?

What is the connection between stakes and the stakeholders?

Can you think of a fun story that was a nightmare scenario?

What drink goes hand-in-hand with the summer holidays?

Can you use the word accusatory in a sentence?

Can you think of a good example of concrete legislation?

In your opinion, what does the French system need guardrails for?

Can you predict the next disruptive tech or idea? 

Case Study 1

Could an artificial intelligence be considered a person under the law?

Humans aren’t the only people in society – at least according to the law. In the U.S., corporations have been given rights of free speech and religion. Some natural features also have person-like rights. But both of those required changes to the legal system. A new argument has laid a path for artificial intelligence systems to be recognized as people too – without any legislation, court rulings or other revisions to existing law.

Legal scholar Shawn Bayern has shown that anyone can confer legal personhood on a computer system by putting it in control of a limited liability company in the U.S. If that maneuver is upheld in courts, artificial intelligence systems would be able to own property, sue, hire lawyers and enjoy freedom of speech and other protections under the law. In my view, human rights and dignity would suffer as a result.

The corporate loophole

Giving AIs rights similar to humans involves a technical lawyerly maneuver. It starts with one person setting up two limited liability companies and turning over control of each company to a separate autonomous or artificially intelligent system. Then the person would add each company as a member of the other LLC. In the last step, the person would withdraw from both LLCs, leaving each LLC – a corporate entity with legal personhood – governed only by the other’s AI system.

That process doesn’t require the computer system to have any particular level of intelligence or capability. It could just be a sequence of “if” statements looking, for example, at the stock market and making decisions to buy and sell based on prices falling or rising. It could even be an algorithm that makes decisions randomly, or an emulation of an amoeba.
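The article's point that the controlling system needs no real intelligence can be sketched in a few lines of code (a hypothetical illustration only; the function name and the decision rule are invented for this sketch, not taken from the article):

```python
def decide(previous_price: float, current_price: float) -> str:
    """A trivially simple 'sequence of if statements' that could,
    in principle, sit at the helm of an LLC: it reacts only to
    whether a price fell or rose."""
    if current_price < previous_price:
        return "buy"   # price fell, so buy
    if current_price > previous_price:
        return "sell"  # price rose, so sell
    return "hold"      # no change, so do nothing

print(decide(100.0, 95.0))  # the price fell
```

No understanding, memory or intent is involved; the "decision-maker" is just a comparison of two numbers, which is exactly why the loophole is troubling.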

Reducing human status

Granting human rights to a computer would degrade human dignity. For instance, when Saudi Arabia granted citizenship to a robot called Sophia, human women, including feminist scholars, objected, noting that the robot was given more rights than many Saudi women have.

In certain places, some people might have fewer rights than nonintelligent software and robots. In countries that limit citizens’ rights to free speech, free religious practice and expression of sexuality, corporations – potentially including AI-run companies – could have more rights. That would be an enormous indignity.

The risk doesn’t end there: If AI systems became more intelligent than people, humans could be relegated to an inferior role – as workers hired and fired by AI corporate overlords – or even challenged for social dominance.

Artificial intelligence systems could be tasked with law enforcement among human populations – acting as judges, jurors, jailers, and even executioners. Warrior robots could similarly be assigned to the military and given the power to decide on targets and acceptable collateral damage – even in violation of international humanitarian laws. Most legal systems are not set up to punish robots or otherwise hold them accountable for wrongdoing.

Case Study 2

AI legal assistant takes its first court case:

Hearing next month will see the defendant get advice from artificial intelligence using a smartphone app

  • DoNotPay’s robot lawyer will advise a defendant on what to say in court 
  • The defendant will use a smartphone app and earpiece to hear the advice
  • This is the first time a person will be represented by artificial intelligence 
  • The courthouse, charges and name of the defendant have not been revealed 

A court hearing in February is set to make history when the defendant is advised by artificial intelligence. 

The technology comes from DoNotPay, a company founded in 2015 by a then-Stanford University freshman, which was initially developed to appeal parking tickets.

The world’s first robot lawyer will run on the defendant’s smartphone, listening to the proceedings and providing its client with instructions on what to say in arguments.

The courthouse location, charges, and name of the defendant have not been revealed, according to New Scientist.

Joshua Browder initially created the robot to appeal parking tickets in the UK when he first launched the technology but has since expanded it to the US.

However, this technology was designed in a chat format, where the bot asks a series of questions to learn the details of your case, such as ‘were you or someone you know driving?’ or ‘was it hard to understand the parking signs?’

After analyzing your answers, the robot decides whether you qualify for an appeal; if so, it generates an appeal letter that can be brought to court.

A similar format will be used in the February court case, but the app will ‘listen’ to exchanges between the prosecutor and the defendant to advise its client on what to say next.

The AI, however, was trained on factual statements to ‘minimize legal liability,’ Browder told New Scientist.

He also tweaked the audio tool not to react to statements instantly, instead letting the other side finish speaking before analyzing the comments and presenting a response.

‘It’s all about language, and that’s what lawyers charge hundreds or thousands of dollars an hour to do,’ said Browder, who believes this technology will one day replace lawyers.

‘There’ll still be a lot of good lawyers out there who may be arguing in the European Court of Human Rights, but a lot of lawyers are just charging way too much money to copy and paste documents and I think they will definitely be replaced, and they should be replaced.’

DoNotPay’s website shows its technology can be used for more than courtroom advice: fighting corporations, beating bureaucracy, finding hidden money and suing anyone.

The robot has learned the laws about canceled and delayed flights and payment protection insurance. 

DoNotPay also offers consumer and workplace rights advice to people in the US and the UK, including harassment at work or misleading claims in adverts. 

And it will connect users to outside help, such as pro bono legal representation, for more serious cases. 

China was the first country to use artificial intelligence in the courtroom.

Last July, it was revealed the nation is using the technology to ‘improve’ its court system by recommending laws, drafting documents and alerting judges to ‘perceived human errors’ in rulings.

By law, judges must now consult the AI on every case, Beijing’s Supreme Court said in an update on the system published this week, and if they go against its recommendation, they must submit a written explanation for why.

The AI has also been connected to police databases and China’s Orwellian social credit system, handing it the power to punish people by automatically putting a thief’s property up for sale online.

China has been developing a ‘smart court’ system since at least 2016 when Chief Justice Qiang Zhou called for technology to improve the ‘fairness, efficiency, and credibility’ of the judicial system.

That has meant introducing robot receptionists to courthouses to offer online legal help, automatic voice recognition recorders in courtrooms that eliminate the need for transcribing, and ‘virtual courtrooms’ where cases can be heard online.

China has even introduced a highly specialized ‘internet court’ that deals solely with cases related to the virtual world – such as online loans, domain name disputes, and copyright issues.

Solution and definitions:

accustomed to = Familiar or comfortable with; having experience or being used to something.

oversight = The act or function of overseeing or supervising something.

CEO = Chief Executive Officer, the highest-ranking executive in a company or organization.

on the same page = In agreement or having a shared understanding or viewpoint.

the stakes = The potential risks or consequences involved in a particular situation or decision.

stakes are high = The potential risks or consequences are significant or substantial.

a nightmare scenario = A situation that is extremely undesirable or has disastrous potential.

hand-in-hand = In close association or cooperation.

accusatory = Expressing or implying blame or fault.

concrete legislation = Specific and detailed laws or regulations.

guardrails = Safety measures or guidelines that help prevent or mitigate risks or undesirable outcomes.

disruptive tech = Technology that significantly changes or disrupts existing industries or societal norms.

into your hands = Giving control or responsibility to someone.


Access the Free Online Course:

The 5 Day English Challenge