The Future of Artificial Intelligence in the UK Legal System Regarding Liability

We've all heard of Artificial Intelligence (AI), whether through films such as The Terminator or through recent advancements such as Tesla's self-driving car. AI is a very real possibility in our lifetime, yet the legal system has still to update old laws such as the non-fatal offences, let alone create laws to account for AI. I will evaluate the liability of defendants when AI fails and causes damage or injury, by looking at current laws and theories and how AI could be accommodated within them.

The status of AI in the UK legal system is currently in dire need of updating in order to capture new ways of offending. Recent advances in AI, and in technology as a whole, require the law to be updated and adapted so that it accurately reflects those advances and achieves justice. AI, broadly, is the display of intelligence by machines such as computers.

In criminal cases liability is commonly based on fault, and it seems right that this should be so and should apply to AI and its failings too. Professor Hart (1961) stated that "the principle that punishment should be restricted to those who have voluntarily broken the law…is a requirement of justice." For example, there is a general requirement in law for a voluntary act, which recognises that defendants should only be liable if acting freely. This is illustrated in Broome v Perkins (1987), where the defendant was guilty because he retained control at times irrespective of his hypoglycaemia. With AI, establishing fault would be difficult and could span multiple defendants, all of whom could be seen as being at fault. For example, the manufacturer could be at fault for a malfunction of the AI, whereas the user could be at fault for being 'reckless' as to its use. This is displayed by the recent crash of Tesla's self-driving car in which the driver was killed. Tesla placed the fault on the driver, stating that the system "still requires the driver to remain alert" (News.com, 2016). The evidence could still pin the fault on Tesla, however, as the car failed to distinguish a tractor "against a brightly lit sky" (News.com, 2016). This shows how establishing fault in such cases would prove difficult for the courts.

According to the Latin maxim "actus non facit reum nisi mens sit rea", both an actus reus and a mens rea are required for someone to be culpable. Serious crimes all require a mens rea; its absence would lead to no liability for the defendant. British law recognises that there are different degrees of blame for offences by using multiple states of mind to reflect the seriousness of the offence. Serious crimes such as murder and grievous bodily harm under s.18 require intention, which is more blameworthy than subjective recklessness (SR), the mens rea used for less serious crimes such as assault and battery; SR is where the defendant realises the risk but carries on regardless. In terms of AI, a mens rea of SR would probably be much better suited, even if the AI committed a serious crime such as 'murder'. This is because if the manufacturer were to be held liable, they most likely would not have created an AI with the intent to kill; rather, they would be subjectively reckless as to a malfunction of some kind which causes a death, for example, a self-driving car which hits and kills a bystander. The mens rea could not be derived from the AI itself, as punishing an AI would be very difficult and pointless; it would make far more sense to hold the manufacturer liable, which would lead to higher standards of care.

One possible way to overcome this issue would be through the use of vicarious liability, which operates in civil cases and could be extended to civil cases involving AI. Vicarious liability is displayed in Rose v Plenty (1976), where a milkman took a 13-year-old boy with him on his rounds despite his employers telling him not to. When the boy was injured, the dairy company which employed the driver was liable despite not being at fault. This could aid liability in civil cases involving AI, as it would mean the manufacturer would be liable, allowing claimants to be properly compensated for any damage to them or their property. As Michael A. Jones (2000) states, "the master has the 'deepest pockets'", displaying how the employer is much better placed to compensate the claimant. A problem with using vicarious liability, however, would be establishing whether the AI is an 'employee', which is needed for vicarious liability; this can mean different things in different situations. For example, priests are generally not seen as 'employees'. However, in JGE v The Trustees of the Portsmouth Roman Catholic Diocesan Trust (2012), because the priest had a relationship with the trust akin to that of employee and employer, it was treated as the same thing. This displays how AI, although a 'product' of its 'employer', could be seen as an 'employee' depending on how the courts interpret the relationship.

If vicarious liability were adapted for use in cases involving AI, it would bring both advantages and disadvantages. One advantage is that it would protect the public, as it is more important in the interests of society that the public be protected than that an individual be proven to be at fault. A similar policy underpins strict liability cases such as Callow v Tillstone (1900), which centred on protecting the public from unfit food. Applied to laws surrounding AI, this could lead to companies being more careful when developing the technology, thereby protecting the public from harm. However, a big disadvantage is that vicarious liability can unjustly punish an employer who is not at fault and who even told an employee not to do something. A comparable injustice arose in Harrow LBC v Shah (1999), where a lottery ticket was sold to a 13-year-old even though the shop owners had warned staff to check for identification and not to make such sales. The same could happen in cases involving AI: if an AI malfunctioned and, for example, harmed someone, it would seem unjust to punish the employers where they did not intend it to happen and most likely took precautions to prevent it from occurring.

To conclude, because AI is such a recent technology there are few, if any, specific laws surrounding it and the liability which comes with it. As this essay displays, liability could fall on multiple defendants, all of whom could be seen as being at 'fault'. There are multiple ways in which this problem could be overcome, one of which is vicarious liability, which would allow the claimant or victim to be adequately compensated for the damage caused. With AI becoming more advanced day by day and its use in everyday life becoming more apparent, the law will have to adapt to these changes as quickly as possible to ensure that justice is properly carried out and that 'loopholes' in the current laws are not taken advantage of, leading to injustice for many people.
