Juridical Will and Algorithmic Harm: Rethinking Legal Personality In The Age Of AI

[By Saloni Rani, second year student, Rajiv Gandhi National University of Law]

I. INTRODUCTION

Artificial Intelligence (hereinafter, AI) is now deployed in almost every sector, from healthcare and banking to accountancy and gaming. This raises a pressing jurisprudential question: should AI be recognised as a legal person? The European Parliament’s 2017 Resolution on Civil Law Rules on Robotics entertained the possibility of ‘personhood’ for autonomous robots (AI) without conferring such status. It may appear promising to some scholars to fix liability on AI by treating its model as an independent legal actor. On the contrary, this piece contends that such reasoning is misframed: it mistakes the foundational doctrinal problem by focusing on what AI is rather than on what AI does and who controls it. The present challenge is not the determination of legal personhood but rather how, once harm has passed through complex technological operations, liability can be attributed, and upon whom it should fall. This piece examines the question through the lens of classical private law. Drawing on Savigny’s volitional theory of legal personality, the concept of corporate personhood, and the veil doctrine, it argues that AI, despite its autonomous set-up, fails to satisfy the jurisprudential threshold of juridical will.

Therefore, this article advances that AI should be viewed as a non-volitional agent, with liability fixed on the deploying enterprise or individual. Granting AI legal personality at this stage would create a ‘digital veil’, behind which the deploying enterprise may exploit algorithmic autonomy as a shield to evade liability. This demonstrates that existing frameworks are sufficiently equipped to deal with AI-mediated harm, but only if courts and regulators, instead of personifying machines, focus on attributing liability to the corporations or individuals behind them. The essay proceeds in three parts: first, by fitting AI within the jurisprudential framework of legal personality through the lens of juridical will; second, by drawing out certain reflections from that analysis; and third, by distilling the essence of agency and the corporate veil into proposed solutions.

II. LEGAL PERSONALITY AND AI

Legal personality refers to the recognition of an entity in whom rights and liabilities can be vested. Jurisprudence recognises two categories of persons: natural persons, who are human beings, and artificial persons, subdivided into the corporation sole and the corporation aggregate. A corporation sole arises where personality attaches to an office; a corporation aggregate arises where individuals are united into a distinct legal entity. The corporation aggregate gained prominence after the House of Lords’ ruling in Salomon v A Salomon & Co Ltd, which recognised a company as having a legal status separate from its members. Any entity vested with legal personality can thus be made liable in a court of law for its acts. Although this rests on the doctrine of the corporate veil, which separates the company’s official affairs from the personal liability of its directors and other individuals involved in it, the veil can be pierced by courts when required, as held in Gilford Motor Co v Horne[i], where the court pierced the corporate form to prevent a fraudulent evasion of obligations.

Friedrich Carl von Savigny, the German jurist, offered an explanation of legal personality in his will theory, set out in his System of Modern Roman Law. He tied personality to juridical will, associating it only with those who have the mental capacity to form intention and take responsibility for their conduct, a threshold that, arguably, only a living being can satisfy. As Naffine observes, this threshold is inherently tied to humans. It rests, precisely, on volitional conduct rather than on any metaphysical operation. To illustrate: a person who enters into a contract must first form an intention, which creates an obligation binding on him because he holds legal personality. Modern theories, such as Hans Kelsen’s, depart from this traditional conception by extending the fictional basis of legal personality to corporations and companies in view of the organisational interests they serve. A company, governed by its board of directors, acts through the volitional conduct of humans. In both cases, the obligation flows from volitional conduct.

The point of discussion is whether this reasoning can be extended to the determination of legal personality for AI. Viewed from the doctrinal edge of the above analysis, AI neither possesses juridical will nor serves an organisational function. The erroneous responses generated by chatbots built on large language models are not intentional or volitional; they are the product of complex computational operations. AI has no individual interest or will of its own. The first rationale for not imposing liability on AI is therefore clear: if it is treated as a legal actor at this point, the result will be nothing but the emergence of a ‘digital veil’, a situation in which the deploying individual intentionally corrupts the system’s algorithm to generate fabricated findings and then hides behind its supposed autonomy.

If AI is instead viewed as a non-volitional agent, courts can draw on the established principle of piercing the veil to fix liability on the actual actor governing it. Thus, the existing doctrine of the corporate veil, if applied properly, is equipped to address this harm. Unlike a corporation, which can claim a fictional identity through the juridical will of its human directors, AI lacks this foundational basis. The author argues, however, that although this may appear complete as a solution, it is not: tracing its applicability requires strong judicial scrutiny and a statutory mechanism.

III. REFLECTIONS

Relating AI to existing frameworks in order to attribute liability to the real actor adds yet another layer of complexity. The technology deployed in AI passes through numerous checks and balances involving a complex web of human actors. The doctrinal gap is whether human control is always ascertainable within these algorithmic operations. Where control is diffused and liability cannot be foreseen, the challenge is to locate fragmented responsibility, not to declare the absence of humans; proximity, in such a scenario, becomes conceptually weak. These complexities, however, do not justify conferring legal personhood on AI. Dias rightly observed that personality can be conferred only where there is an identifiable human interest and accountability, both of which are absent in AI. Juridical will cannot be applied as a matter of convenience. Since AI is a non-volitional entity, extending juridical personality to it would dilute the conceptual foundations of private law.

One of the primary challenges is the ‘black box problem’: the opacity of algorithmic operations makes it very difficult for humans to trace an output to its origin and thereby assign liability. Under the law of torts, liability can still be fixed around duty, breach, and damage. While this opacity undoubtedly challenges the principle of foreseeability in tort law, strict judicial scrutiny can serve as an evidentiary solution. Tort law has proved this capacity by evolving through the first industrial revolution, cyberspace, and subsequent technological advancements. The doctrine of respondeat superior could, in this context, lay a strong foundation whereby liability is fixed on the employer, who must answer for the harm. In cases of diffused causation, liability can be apportioned among multiple market players. Consequently, the main task is to refine the process of attribution, not to personify the machine.

The author contends that this approach would create structural consistency and build a strong foundation for assessing liability for harm.

IV. BLUEPRINT FOR REFORMS

If liability for harm caused by the non-volitional conduct of AI is to be assessed, the attribution framework requires structural reform: clearer legislative standards for assessing liability and stronger judicial tools for tracing causation in algorithmic systems. To plug the gaps identified, two changes, one legislative and one judicial, are required.

Firstly, courts must expand the interpretation of respondeat superior to reach AI-mediated harm. The employer that conducts business operations through AI and reaps the commercial benefits should bear responsibility for the harm AI causes. To operationalise this, a two-tier framework is needed. At the first tier, the deployer must register the AI with a designated statutory authority, along with complete documentation of the algorithmic system’s functioning and its training data. At the second tier, the authority must conduct audit checks to verify that the AI is operating within prescribed limits. This would incentivise entrepreneurs to examine their algorithms’ conduct regularly. Further, insurance against harm caused by AI should be made available to protect businesses from monetary losses.

Secondly, a doctrine of the ‘digital veil’, analogous to the corporate veil recognised in Salomon, must be developed. This reform is similar to the first, but it extends liability to the persons directly or indirectly governing the AI.

These reforms can give meaningful shape to existing statutes and principles. The need of the hour is to implement them effectively.


[i] Gilford Motor Co v Horne  [1933] Ch 935 (CA).

