
31 May 2023

Pros and Cons of Using AI in Legal Proceedings

There is active ongoing research on applying Artificial Intelligence to the legal industry (Rissland, Ashley and Loui, 2003). One of the most current topics, for example, is the modelling of judicial decisions through natural language processing (NLP) and computational models of argument (Bench-Capon and Atkinson, 2022).

There are many pros and cons to implementing AI techniques and algorithms across the various aspects of the legal industry (Zhong et al., 2020).

Here the focus is on the advantages and disadvantages of using AI specifically in legal proceedings -- inside the court system during the litigation process. Some ethical issues are addressed, and an overall conclusion and policy advice are presented at the end.

Advantages

A very large number of AI tools and applications have already been developed for use in the legal process (Footnote 1).

Typical advantages of using AI in legal proceedings include some generally agreed-upon points (Footnote 2) such as efficiency, reduction of human error, fast automated document processing (Pietrosanti and Graziadio, 1999), assistance with vast amounts of data, drafting of legal documents and judgements, improved impartiality (Chatziathanasiou, 2022), and others (Cui, 2019).

One example illustrating the advantages of using AI is the project “to design, implement and evaluate an explainable decision-support tool for deciding legal cases under Article 6 of the European Convention on Human Rights (ECHR)” (Collenette, Atkinson and Bench-Capon, 2023). The researchers developed a well-received, usable tool that matched the actual decisions of cases with 97% accuracy. Tools with such demonstrated explainability and trustworthiness could be used to automatically decide the admissibility of cases, speed up the processing of applications, clear backlogs, and reduce the human time and effort spent on manual evaluation. Other studies have used NLP techniques to predict the same court's decision outcomes with 79% accuracy (Aletras et al., 2016).

Some possibly less explored advantages are suggested below.

Self-Correction and Judgement Reversals

AI can adapt its own rules and principles of decision-making much faster than humans can. It is therefore much more likely to adopt self-correction algorithms and reverse its own previous wrongful judgements or positions, leading to an improved justice system.

Aggregation of Legal Systems

AI is capable of exploring and aggregating data from any national or international legal system, whether from recent history or from ancient times, in any language. The process would force the AI to resolve internal rule-based conflicts, which may lead to innovations and improvements in the modern justice system.

Improved Accuracy and Completeness

Humans are limited by time constraints and by their capacity to handle large volumes of data. AI could review and analyze all available data for each case, resolving problems such as cherry-picking and biased data selection.

Disadvantages

There is also some popular consensus regarding the disadvantages of legal AI applications (Footnote 3), such as the high cost of development and implementation, bias arising from embedded rules or values, technical challenges such as understandability, interpretability, transparency and algorithmic traceability (Axpe, 2021), and legitimacy (Mommers, 2005).

Some other possible disadvantages are proposed below.

Software Malfunction and Troubleshooting

The complexity of AI code and design requires extensive expert work to identify and fix problems as they arise. The AI may also keep reverting on its own to embedded, undesirable models of legal reasoning.

Hardware Maintenance and Energy Consumption

Hardware needs regular maintenance and replacement of components as technological progress advances and amortization takes place. The costs of maintenance, repair and high energy consumption may exceed those of a standard human-based system.

Cyber Security

While cyber security is a serious concern for any digital technology, it is a particular disadvantage for AI in legal proceedings: any breach can severely distort the judicial process and its final outcomes, and can critically violate privacy, with severe consequences for the humans affected.

Ethical Concerns

Ethics is fundamentally defined as the moral principles of right and wrong (Footnote 4), and ethical concerns refer to the potential negative moral consequences of AI implementation (Kazim and Koshiyama, 2021).

Forced Imposition of a Dominant AI

There is a danger of a narrow group of experts covertly imposing their own methods and values onto others through developing AI for judicial legal systems.

For example, importing and adopting AI software developed by experts steeped in the Western liberal social-values system may pose a significant national-security threat to a Muslim nation whose legal system is based on Quranic principles.

Unpredictability and Loss of Human Control

The larger the datasets on which an AI is trained, the more unpredictable and intractable its behaviour becomes for humans. This poses the threat of losing human control over, and understanding of, the AI's processes and of the entire judicial system.

Spirituality and AI

A large portion of the world's human population believes in God as the ultimate moral authority on right and wrong, as codified in the scriptural Law of God. Human legal experts, in addition to their legal training and experience, retain a separate, broader understanding of and sensitivity to ethics and morals, including on the basis of religion. The use of AI may lead to a deficiency of spiritual and ethical considerations in legal proceedings, and to placing formalism and procedural rules above fundamental moral principles of justice, right and wrong.

The area of spirituality, religion and AI would benefit from further research.

Conclusion

The complexity of the AI designs necessary for any successful implementation of AI in legal proceedings inevitably creates many advantages and disadvantages. Every use case needs to be evaluated for its risks and benefits, and developed applications should be released for practical implementation only when the expected benefits are large enough to warrant the costs and the expected risks are deemed low enough to avoid major harm and disruption to the standard legal process.

Advice

Proceed with serious caution. There are great dangers in over-automation and over-use of AI in court proceedings. The greatest of all is ending up with a system of great injustice rather than justice.

This could come from two main sources: 

(1) Biased algorithms embedded by the select "expert" developers, whether unintentionally or fully intentionally albeit covertly, with the goal of imposing their own political or social-engineering agenda on the general population of the world, including on rival political or religious systems. For example: imposing a liberal ideology on morally conservative societies that normally oppose liberal views and reject legal reforms in that direction. Through automated AI-driven systems, court decisions could end up contradicting the written laws of those jurisdictions; without human oversight, precedents could be set and a legal basis established for a gradual transformation of the conservative systems into liberal ones. Another example is individual cases being judged unfairly due to embedded nationality biases reflecting the nationality of the software developers -- say, a local criminal fully exonerated despite being guilty of a crime against a foreigner. The more automation and the less human oversight, the more serious the problem of embedded bias becomes.

(2) Disruption of the entire economic, social and justice systems -- by making the standard professions of legal experts, including judges, nearly redundant while increasing the need for everyone in society to become a technology expert. This could have rapid effects on a massive scale that neither the educational system nor the human population and demography can absorb fast enough. Humans are born with preset affinities, inclinations, abilities, preferences, strengths and weaknesses. Not everyone can become tech-savvy even if they wished to; moreover, not everyone wishes to. A judge would much rather be a judge than deal with computer hardware and software issues or keep learning the new interfaces of ever more applications. Yet unless they do learn for themselves, as more and more automation and AI-driven solutions are implemented in the court system, judges would become fully dependent on the so-called technology experts. At the same time, with the parallel growth of technology and AI use in every other sector of the economy, society would ultimately be unable to meet the demand for the huge number of technology experts required everywhere for everything. The shortage of tech-savvy experts would force judges and other legal professionals to handle technology problems on their own and to learn for themselves the computer systems and applications they must use, wasting immense amounts of time on computer-science matters, including frustrating troubleshooting, rather than doing the far more pleasurable, rewarding and useful legal work they love. Ultimately, human civilization may collapse because not every human has the inborn ability or desire to learn and constantly deal with advanced AI and computer-system technology.
In the court system specifically, judges, lawyers and the other legal experts involved may completely lose the ability to oversee the automated and AI processes, which could result in horrible injustices without any human recourse. The AI would effectively be doing as it pleases, simply attaching human names to documents and court decisions produced without those humans having any idea what their identity is being used for, nor any viable means of control and oversight. The rush to make everyone's job easier could ultimately make everyone's job unbearable. Worse, the justice system may be converted into a "black box" of intractable and uncontrollable processes, and the human population as a whole may lose all access to a true justice system.

What should policy makers beware of most?

#1 Profiteering. For as long as the economy of the entire human civilization runs on the idea of "profits", and tech entrepreneurs operate in an investment culture of "scalable solutions" for ever larger profits and returns on investment, there will inevitably be false advertising, overconfidence, cover-ups of problems, lies, deceptions and even crimes of the most egregious kind. And when this affects the justice system itself, who is going to stop the ruthless profiteers? A legal case? A good judge? Well, who developed the software used to decide whether a case is even "admissible", then to select the judge or jury, edit the evidence and, ultimately, prepare the judgement? Whose interests is that AI likely to favour?

#2 Competition. There is global pressure to follow the technology revolution fast enough, without sufficient due diligence, to avoid "falling behind" or losing "the competition game" in the global economic and development drive. Policy makers are literally shamed if they do not rush to implement every new gadget, application or AI solution presented by technocrats as the latest fad and the magic to solve all problems. Insufficient time is given for due-diligence testing and oversight by a large segment of the intended future users. Over-reliance on "leading global experts" in a particular field ultimately creates dependence on a very narrow group of people, since only so many can ever be ranked as "leading global experts". This leads to nearly blind adoption, under pressure, of technology that no one really fully understands and no one is allowed to criticise for "lack of expertise" -- neither the policy officials who must sign the decisions to proceed, nor the general public who are ultimately forced to become users without choice.

#3 Fraud. The good old-fashioned deception tactics used to lure buyers with false advertising -- like selling a love potion, or a facial cream that keeps someone forever young. The technology sector has reached such a lack of oversight and control, coupled with investor pressure to release new products and scale up quickly for large profits, that it has become a natural hub and breeding ground for false advertising, fraud and abuses of various kinds, driven by one goal only: profits, more and more, faster and faster. Indeed, isn't profiteering already embedded in the AIs themselves? Even if humans step back or try to slow down the process, who is going to stop the autonomous AIs already developed and running ahead on their own -- AIs capable of using human names and identities without any human oversight or control? Are most of the initial developers even still alive, considering that most of these projects are "top secret"? And does it matter any longer? The possibility of deception is so great that it is the duty of policy makers to stop and think very carefully before rushing to let yet another "tech solution" take over the societies entrusted to them.

What could be done better?

  1.  Forget about "profiteering" in matters of civilizational human survival.

  2.  Forget about "competition" and rushing to be first adopters.

  3.  Forget about "leading global experts" and run own testing and due diligence procedures within own population.


References

  1. Aletras, N., Tsarapatsanis, D., Preoţiuc-Pietro, D. and Lampos, V. (2016) ‘Predicting judicial decisions of the European Court of Human Rights: a Natural Language Processing perspective’, PeerJ Computer Science, 2, e93. https://doi.org/10.7717/peerj-cs.93

  2. Axpe, M. R. V. (2021) 'Ethical Challenges from Artificial Intelligence to Legal Practice', Lecture Notes in Computer Science: Springer International Publishing, pp. 196-206. https://doi.org/10.1007/978-3-030-86271-8_17

  3. Bench-Capon, T., Araszkiewicz, M., Ashley, K., Atkinson, K., Bex, F., Borges, F., Bourcier, D., Bourgine, P., Conrad, J. G., Francesconi, E., Gordon, T. F., Governatori, G., Leidner, J. L., Lewis, D. D., Loui, R. P., Mccarty, L. T., Prakken, H., Schilder, F., Schweighofer, E., Thompson, P., Tyrrell, A., Verheij, B., Walton, D. N. and Wyner, A. Z. (2012) 'A history of AI and Law in 50 papers: 25 years of the international conference on AI and Law', Artificial Intelligence and Law, 20(3), pp. 215-319. https://doi.org/10.1007/s10506-012-9131-x 

  4. Bench-Capon, T. and Atkinson, K. (2022) 'Using Argumentation Schemes to Model Legal Reasoning', arXiv:2210.00315v1 [cs.AI].  https://doi.org/10.48550/arXiv.2210.00315

  5. Chatziathanasiou, K. (2022) 'Beware the Lure of Narratives: “Hungry Judges” Should Not Motivate the Use of “Artificial Intelligence” in Law', German Law Journal, 23(4), pp. 452-464. https://doi.org/10.1017/glj.2022.32 

  6. Collenette, J., Atkinson, K. and Bench-Capon, T. (2023) 'Explainable AI tools for legal reasoning about cases: A study on the European Court of Human Rights', Artificial Intelligence, 317, pp. 103861. https://doi.org/10.1016/j.artint.2023.103861 

  7. Cui, Y. (2019) Artificial Intelligence and Judicial Modernization. 1 edn.: Springer Singapore. https://doi.org/10.1007/978-981-32-9880-4 

  8. Kazim, E. and Koshiyama, A. S. (2021) 'A high-level overview of AI ethics', Patterns, 2(9), pp. 100314. https://doi.org/10.1016/j.patter.2021.100314

  9. Mommers, L. (2005) 'Legitimacy and the Virtualization of Dispute Resolution', Artificial Intelligence and Law, 13(2), pp. 207-232. https://doi.org/10.1007/s10506-006-9012-2 

  10. Pietrosanti, E. and Graziadio, B. (1999) Artificial Intelligence and Law, 7(4), pp. 341-361. https://doi.org/10.1023/A:1008304118095 

  11. Rissland, E. L., Ashley, K. D. and Loui, R. P. (2003) 'AI and Law: A fruitful synergy', Artificial Intelligence, 150(1-2), pp. 1-15. https://doi.org/10.1016/S0004-3702(03)00122-X

  12. Sartor, G., Araszkiewicz, M., Atkinson, K., Bex, F., Van Engers, T., Francesconi, E., Prakken, H., Sileno, G., Schilder, F., Wyner, A. and Bench-Capon, T. (2022) 'Thirty years of Artificial Intelligence and Law: the second decade', Artificial Intelligence and Law, 30(4), pp. 521-557. https://doi.org/10.1007/s10506-022-09326-7

  13. Zhong, H., Xiao, C., Tu, C., Zhang, T., Liu, Z. and Sun, M. (2020) 'How Does NLP Benefit Legal System: A Summary of Legal Artificial Intelligence', Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Association for Computational Linguistics, pp. 5218-5230. https://doi.org/10.18653/v1/2020.acl-main.466

Footnotes

  1. https://emerj.com/ai-sector-overviews/ai-in-law-legal-practice-current-applications/; accessed on 23 March 2023

  2. https://venturebeat.com/datadecisionmakers/the-advantages-and-disadvantages-of-ai-in-law-firms/; accessed on 23 March 2023

  3. https://businesslawtoday.org/2022/02/how-ai-is-reshaping-legal-profession/; accessed on 23 March 2023

  4. http://www.iep.utm.edu/ethics  (Internet Encyclopedia of Philosophy); accessed on 23 March 2023


© SGP1979 info@sgp1979.com