As the world changes and technology develops at an ever faster pace, society is moving towards transformation. Evolving technologies, especially Artificial Intelligence (AI) and Machine Learning (ML), have travelled far along this course. AI and ML, along with related technologies such as speech recognition and natural language processing, have reached a nexus of capability. However, these developments have raised issues of ownership, accountability, representation and management with regard to AI. Such issues bring us to the central question: should AI systems be given a legal status or not? For obvious reasons, the debate on this topic is never-ending. But continuing development hints at incoming legal effects of AI, and proper adaptation of the law is therefore essential. Giving legal status to AI also implies conferring a legal personality on robotics.

The present article deals with the legal personhood of AI systems and the arguments for and against it. The need for a framework to deal with AI and ML is of major concern; a further concern is what preconditions should be established before such a framework is introduced. The author attempts to provide an insight into these topics.


An entity is said to have legal personhood when it is a subject of legal rights and duties.[1] The law recognises two types of legal personhood, namely natural and juridical.

  • Natural – persons recognised by the simple fact of being human.[2]
  • Juridical – non-human entities that have been granted certain rights and duties by law.[3]

Under the legal system, juridical legal status has been granted to corporations, religious entities, governmental and intergovernmental entities, etc. According to some scholars, such status can also be extended to robotics and emerging AI systems.

Juridical personality rests on three theories, namely,

  • Aggregate theory: individual members work in a group as a single entity, while establishing individual contractual relations, in order to cut costs.
  • Fiction and concession theory: non-human entities have a personality because the legal system chooses to give it to them.
  • Realist theory: personality is conferred on non-human entities as a matter of right.

A closer look at the above three theories makes it clear that the aggregate theory is the least applicable to AI systems, whereas the fiction and concession theory can be extended to them.

When it comes to natural legal personhood, there are no particular theories. The concept of natural legal personality is so deeply rooted in the legal system that it is almost impossible to articulate.



The question whether AI systems should be given legal personhood has itself led to further questions. First, should they be subject to the complete set of legal rights and duties, or only to a specific set? Second, should they be given only rights, or only duties?

Where only a specific set of legal rights and duties is provided, it may not be the same for every entity. If only rights are conferred upon AI systems, problems of standing would arise, as human individuals would be enabled to act on behalf of a non-human rights holder rather than being required to establish standing in their own capacity.[4] If only duties are conferred upon them, accountability issues would follow, such as problems in imposing civil liabilities like damages.


The arguments in favour of granting legal personhood to AI systems proceed from a vision that places 'robot rights' in parallel with human rights. Those in favour argue that the rights of robots should also be recognised, and they list various points that would be advantageous to human beings in the long run.

According to jurists, conferring legal personality on AI systems would ensure that there is someone to blame when things go wrong. This is presented as an answer to the potential accountability gaps created by their speed, autonomy and opacity.[5] AI systems could then not only be punished through retribution, incapacitation, deterrence and rehabilitation, but could also be treated comparably to corporations. This would further bring AI systems within the jurisdiction of both civil and criminal courts. In cases of extreme default, there could even be a right to destroy the robot completely. If the situation required, a robot could be fined, have its property seized, or have its licence to operate suspended or revoked.

Conferring legal personality would also ensure accountability for the works and actions of the AI system. This would further help to secure the ethical principles of AI: accountability, responsibility and transparency.[6]

  • Accountability in an AI system requires both the function of guiding action and the function of explanation.
  • Responsibility refers not only to the role of people but also to the capability of AI systems to answer for their decisions and to identify errors or unexpected results.
  • Transparency refers to the need to describe, inspect and reproduce the mechanisms through which AI systems make decisions and learn to adapt to their environment, and to the governance of the data used and created.

Gabriel Hallevy, the best-known defender of AI punishment, contends that 'when an AI entity established all elements of a specific offence, both external and internal, there is no reason to prevent imposition of criminal liability upon it for that offence'.[7] He concludes that 'there is no substantive legal difference between the idea of criminal liability imposed on corporations and on AI entities'.[8]

Conferring legal personality on AI systems would vest ownership of work done by the AI system in the system itself, as opposed to its parent owner. Where something has been created by the AI, the ownership rights, i.e. the intellectual property rights, would lie with it, and the human would not be able to take the credit. Unsurprisingly, however, in most legal systems around the world the person claiming intellectual property has to be a natural person, not a juridical person.[9] Due to this, legal persons other than humans are denied ownership of the intellectual property they create. As per WIPO, such a system favours 'the dignity of human creativity over machine creativity'.

The conferment of legal personality on AI systems would also ensure their protection from human manipulation. Since legal personhood would give an AI system the ability to sue and be sued, it would have its own recognition and independent identity, reducing the chances of its being manipulated in the interests of humans. Additionally, a system of lifting the veil, as exists for corporations, could be created for AI systems, adding to their protection from human manipulation.[10] This is in the interest of AI systems, and is practically possible only if legal personality is conferred upon them.

The conferment of legal personality on AI systems would also enable their entry into contracts.[11] The use of electronic agents to conclude binding agreements is hardly new: high-frequency trading, for example, relies on algorithms concluding agreements with other algorithms on behalf of traditional persons.[12] Granting personhood to such AI systems would therefore smooth their operation and address the potential accountability gaps raised by AI in relation to entry into contracts.

Moreover, if legal personality were conferred upon these systems, their legal rights would be recognised: instead of being treated as slaves, they would be treated as employees.


Just as a coin has two sides, every argument has two approaches. While many scholars are of the view that AI should be given legal status, many are against it. According to the latter, conferring legal personality on robots would lead to various problems. If robots are granted legal status, a time may come when the question of granting the same status to every other AI and ML system arises, which would create unnecessary complications.

It would further pose a serious threat to the human owner of the system. Conferring legal personality on robots would create a principal-agent (master-servant) relationship between the owner and the robot, exposing the owner to strict liability for the acts of the machine.[13]

Various AI experts further speculate that if AI systems do eventually match human intelligence, they will not stop there; they may go on to perform extra-normal activities, which may be negative. In many cases, there would be no way to find out whether the AI acted according to the instructions given by the owner or according to its own recoding of those instructions. This would be detrimental to the interests of the owner, leaving him liable.

If legal rights are conferred upon them, the option of destroying such an AI system would be eliminated, which could be very dangerous for the existence of humanity.[14]

Providing legal rights would further confer intellectual property rights on robots. This would not only disregard the owner's effort in making the AI system; the credit for all work done by the AI system would remain with it, and the owner would not even hold the moral rights in that work.[15] This would eventually result in a lack of motivation and would not serve the best interests of creators.

There may also be instances where owners misuse the 'separate legal entity' status provided to robots by shifting all responsibility and liability onto them in order to evade their own liability. Just as the concept of a separate legal entity gives an advantage to the shareholders of a company, there is a risk that owners may take undue advantage in the same way.

Conferring legal personhood on AI systems would also not be in the interest of society. The arguments for granting such personhood are not sufficient to show that it should be granted in the first place. Hence, conferring legal personality on AI systems is not advisable.


The foregoing discussion deals with whether AI systems should be given legal status or not. On one hand, various individuals suggest conferring that status; on the other, many are against it. Whether conferring legal personality on AI systems is desirable depends on actual social necessity. Most importantly, it needs to be determined whether future society can function without conferring such a status. If it cannot, the need is first to set the preconditions, then to provide a framework, and finally to check whether AI systems can fit within that framework.

The most important parameter in deciding whether AI systems should be granted legal status is whether doing so is in the interest of society. Only if it is in society's best interest should the question of providing legal status be considered. That said, it is safe to conclude that 'the consideration that an autonomously functioning artificially intelligent robot should have a secure legal subjectivity is dependent on the actual social necessity in a certain legal and social order'.


[1] LB Solum, 'Legal Personhood for Artificial Intelligences' (1992) 70(4) North Carolina Law Review 1238–1239.

[2] N Naffine, ‘Who Are Law’s Persons? From Cheshire Cats to Responsible Subjects’ (2003) 66 MLR 346.

[3] Ibid.

[4] C Rodgers, 'A New Approach to Protecting Ecosystems' (2017) 19 Env L Rev 266.

[5] S Chesterman, 'Artificial Intelligence and the Problem of Autonomy' (2020) 1 Notre Dame Journal of Emerging Technologies 210; S Chesterman, 'Through a Glass, Darkly: Artificial Intelligence and the Problem of Opacity' (2021) AJCL (forthcoming).

[6] Virginia Dignum, 'The ART of AI – Accountability, Responsibility, Transparency' (Medium, 4 March 2018) https://medium.com/@virginiadignum/the-art-of-ai-accountability-responsibility-transparency-48666ec92ea5

[7] Gabriel Hallevy, 'The Criminal Liability of Artificial Intelligence Entities – From Science Fiction to Legal Social Control' (2010) 4 Akron Intell Prop J 171, 191.

[8] Ibid.

[9] Copyright, Designs and Patents Act 1988 (UK), s 9(3); Copyright Act 1994 (NZ), s 5(2)(a); Copyright Amendment Act 1994 (India), s 2; Copyright Ordinance 1997 (HK), s 11(3); Copyright and Related Rights Act 2000 (Ireland), s 21(f).

[10] J Turner, Robot Rules: Regulating Artificial Intelligence (Palgrave Macmillan 2019) 193.

[11] S Chopra and LF White, A Legal Theory for Autonomous Artificial Agents (University of Michigan Press 2011) 160.

[12] T Cuk and A van Waeyenberge, 'European Legal Framework for Algorithmic and High Frequency Trading (MiFID 2 and MAR): A Global Approach to Managing the Risks of the Modern Trading Paradigm' (2018) 9 EJRR 146.

[13] Ryan Abbott and Alex Sarch, 'Punishing Artificial Intelligence: Legal Fiction or Science Fiction' (2019) 53 UC Davis Law Review 323.

[14] N Bostrom, Superintelligence: Paths, Dangers, Strategies (Oxford University Press 2014).

[15] 'Do We Need a New Legal Personhood in the Age of Robots and AI?' in Marcelo Corrales and Mark Fenwick (eds), Robotics, AI and the Future of Law (Perspectives in Law, Business and Innovation, Springer 2018).
