AI IN LEGAL STUDIES: NAVIGATING THE PROSPECTS AND HURDLES FOR LAW FACULTY IN HIGHER EDUCATION

Julio A. Alvarado-Vélez

Universidad Nacional de Chimborazo (UNACH)

Riobamba, Ecuador

 julio2alvarado@gmail.com

 

Corresponding author: julio2alvarado@gmail.com

Received: 26/11/2024        Accepted: 18/04/2025   Published: 07/07/2025

ABSTRACT

This paper examines the influence of Artificial Intelligence (AI) on legal education, focusing on its advantages as well as the ethical and pedagogical challenges it introduces in the university training of future legal professionals. The aim was to evaluate how AI can reshape legal education without compromising the ethical and pedagogical integrity of the learning process. Using a qualitative methodology with a documentary design, a content analysis of various educational AI tools was performed, assessing elements like personalized learning, accessibility, automated feedback, and usability. Findings suggest that AI enables personalized learning and optimizes real-time feedback and assessment; however, it also presents risks such as algorithmic bias and restricted accessibility. Furthermore, AI use may alter classroom dynamics and reduce direct engagement with professors, potentially affecting students’ ethical growth. In summary, while AI offers considerable potential for legal education, its implementation requires active oversight and a strong ethical framework to ensure inclusive and equitable education, maintaining quality and pedagogical standards in legal learning.

Keywords: artificial intelligence, legal education, legal profession.

IA EN LOS ESTUDIOS DE DERECHO: NAVEGANDO POR LAS OPORTUNIDADES Y OBSTÁCULOS PARA EL PROFESORADO EN LA EDUCACIÓN SUPERIOR

RESUMEN

Este artículo examina la influencia de la Inteligencia Artificial (IA) en la educación jurídica, enfocándose en sus ventajas y en los desafíos éticos y pedagógicos que introduce en la formación universitaria de futuros profesionales del derecho. El objetivo fue evaluar cómo la IA puede transformar la educación jurídica sin comprometer la integridad ética y pedagógica del proceso de aprendizaje. Utilizando la metodología cualitativa con un diseño documental, se realizó un análisis de contenido de diversas herramientas educativas de IA, evaluando elementos como el aprendizaje personalizado, la accesibilidad, la retroalimentación automatizada y la usabilidad. Los hallazgos sugieren que la IA permite un aprendizaje personalizado y optimiza la retroalimentación y la evaluación en tiempo real; sin embargo, también presenta riesgos como el sesgo algorítmico y la accesibilidad limitada. Además, el uso de la IA puede alterar la dinámica en el aula y reducir la interacción directa con los profesores, lo que podría afectar el desarrollo ético de los estudiantes. Se concluye que aunque la IA ofrece un potencial considerable para la educación jurídica, su implementación requiere supervisión activa y un sólido marco ético para asegurar una educación inclusiva y equitativa, manteniendo los estándares de calidad y valores pedagógicos en el aprendizaje del derecho.

Palabras clave: inteligencia artificial, enseñanza jurídica, profesión jurídica.

1. INTRODUCTION

Artificial Intelligence (AI) has profoundly transformed multiple sectors of society, and education is no exception (An et al., 2024). In the field of legal education, AI is a tool with considerable potential to innovate teaching methodologies, facilitate learning, and promote more equitable access to legal training. As technologies advance, educational environments face growing pressure to adapt and leverage these tools, and law teaching in universities is not exempt from this shift (Tzirides et al., 2024).

For law professors, the integration of AI offers opportunities to enhance teaching quality, broaden educational reach, and better prepare students for an increasingly technological professional environment (Stöhr et al., 2024). However, the use of AI also raises various ethical, pedagogical, and methodological challenges, requiring a thorough reflection on how these technologies can and should be utilized in the law classroom (Fu & Weng, 2024).

Legal education currently faces specific challenges that AI could help address. On one hand, the growing volume of legal information and the complexity of modern jurisprudence demand teaching methods that prepare students not only to handle large amounts of information but also to develop analytical and critical reasoning skills (Doğan et al., 2024). On the other hand, traditional legal education has been criticized for its rigidity and reliance on theoretical, memorization-focused methods, often neglecting practical skills and personalized learning (Grimes, 2020). In this context, AI can offer innovative solutions, from tools that allow for personalized student learning to programs that facilitate real-case analysis or simulate complex legal situations.

AI use in legal education can manifest in various forms, adapting to the specific needs of law professors and students. Personalized learning platforms, which use algorithms to adjust content and learning pace to individual student needs, are one of the most prominent applications. These tools can assist professors in identifying areas of difficulty among their students and providing targeted pedagogical solutions (Hashmi & Bal, 2024). Furthermore, AI systems can support teaching by automating repetitive tasks, such as grading and managing scores, enabling professors to dedicate more time to teaching and personalized student support (Parker et al., 2024).

However, using AI in law education is not without challenges and risks. One primary issue is the potential depersonalization of education, where the focus on algorithms and technology may reduce the role of human interaction, an essential part of training future lawyers (Alexander et al., 2024). Legal education goes beyond technical knowledge transmission; it includes an ethical, critical, and practical dimension that can only be conveyed through direct, personal interactions between professors and students. This aspect is crucial since lawyers need not only legal knowledge but also communication skills, professional ethics, and a deep understanding of the law’s role in society. Excessive reliance on AI tools could ultimately limit the development of these skills in law students.

Furthermore, AI use in education raises significant ethical questions (Kajiwara & Kawabata, 2024). AI operates on algorithms and datasets that, while technically advanced, are not free from bias (Vetter et al., 2024). Personalized AI-driven learning, for instance, can generate inequalities if algorithms fail to consider contextual differences adequately or if the data used to train the system contain biases. For the legal field, where fairness and justice are fundamental values, any form of bias in education is particularly problematic. Therefore, implementing AI in law teaching requires not only a technical focus but also continuous, careful evaluation of potential biases, ensuring that these tools do not reproduce or amplify existing educational inequalities.

The effectiveness of AI in legal education also heavily depends on faculty training and adaptability. Law professors, largely accustomed to traditional teaching methods, may encounter difficulties integrating these new technologies into their daily practice (Onwuachi-Willig, 2023). Faculty training in AI tools and institutional support for their adoption are key factors influencing the success of this transformation (Cantatore, 2019). At the same time, professors must recognize these tools' limitations and understand that AI is a complement to, not a substitute for, human teaching (Pahi et al., 2024). Implementing AI in the law classroom requires a balance that captures the technology's advantages without compromising pedagogical quality or the teacher-student relationship.

Amid this context of opportunities and challenges, it is essential to analyze how AI can be optimally used in law education. This article seeks to address AI’s utility and potential for transforming university-level legal education, as well as the difficulties and ethical dilemmas it poses. Through a review of current tools and a critical analysis of their implications, this study aims to provide a clear and balanced perspective on AI’s application in the law classroom, contributing to a deeper understanding of this technology and its impact on training future jurists.

2. METHODOLOGY

To address the established objective, this study used a qualitative methodology with a documentary design, allowing for an in-depth analysis of the impact of artificial intelligence tools on university-level law teaching. This methodological approach sought to provide a detailed understanding of the pedagogical and ethical aspects associated with AI use in legal education, focusing on how these tools can support learning and to what extent their implementation might challenge certain fundamental educational principles.

Data collection was conducted through a detailed content analysis of documentary sources, which included academic and scientific studies, case reports, and technical descriptions of educational AI tools currently used in universities. This process was supplemented by gathering supporting materials from major educational technology providers and AI developers that offer adaptive and personalized platforms in legal education. In addition, best practice guides for AI use in education and ethical policies published by educational institutions and international organizations were reviewed.

Perplexity AI, You.com, and Google Bard were selected for this study. The selection of these AI tools followed a non-probabilistic convenience sampling approach, suitable in contexts where sample comprehensiveness is neither feasible nor necessary (Sexton, 2022; Zickar & Keith, 2023). By opting for this type of non-probabilistic sampling, the study prioritized tools that exhibit specific characteristics aligned with the research objectives, including adaptability, pedagogical support, and available evidence of their effectiveness in higher education.

Furthermore, the choice of these support tools in legal education is based on their capacity to offer personalized and adaptive assistance through natural language models, which enhance the understanding and analysis of complex topics. Notably, because they are free to use, these tools promote equity in access to advanced learning resources at no additional cost to students. Their selection is further justified by their effectiveness in processing large volumes of legal information, their ability to provide reliable references, and their adaptability to the academic context, thus enabling a personalized educational experience.

In parallel, content analysis was applied to the specific AI tools previously selected, considering relevant functionalities such as personalization algorithms, automated feedback systems, and accessibility and adaptability options. This analysis involved a review of interfaces, customization capabilities, and data security—critical aspects to ensure that AI use in law classrooms does not compromise education quality or equitable access.

For the content analysis, categories were structured around the research objectives: personalization, automated feedback, usability, accessibility, and algorithmic bias. The initial documentary review facilitated the creation of a conceptual framework that guided the analysis of each educational AI tool. These categories allowed for an in-depth examination of how each AI feature influenced students' educational experience and the pedagogical work of law professors.
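By way of illustration only, the following minimal sketch (in Python) shows one way a keyword-assisted first pass over documentary excerpts could support this kind of category scheme. The category names mirror those of the study, but the keyword cues, the code_excerpt helper, and the sample excerpts are hypothetical and do not reproduce the instrument actually used.

# Hypothetical sketch of a keyword-assisted coding pass for documentary excerpts.
# Category names follow the study; keyword cues and sample excerpts are invented
# for illustration only.
CATEGORIES = {
    "personalization":    ["adaptive", "personalized", "learning pace", "tailored"],
    "automated_feedback": ["feedback", "assessment", "grading", "correction"],
    "usability":          ["interface", "navigation", "intuitive", "ease of use"],
    "accessibility":      ["caption", "contrast", "screen reader", "disability"],
    "algorithmic_bias":   ["bias", "training data", "underrepresented", "fairness"],
}

def code_excerpt(excerpt: str) -> list[str]:
    """Return the analytic categories suggested by an excerpt's wording."""
    text = excerpt.lower()
    return [cat for cat, cues in CATEGORIES.items() if any(cue in text for cue in cues)]

excerpts = [
    "The platform adjusts the learning pace to each student's progress.",
    "Automatic captions and contrast settings are available, but screen reader support is partial.",
    "Recommendations are generated from training data drawn mainly from English-language case law.",
]
for e in excerpts:
    print(code_excerpt(e), "<-", e)

In practice, such automated cues would only pre-sort material; the interpretive coding itself remained a manual, hermeneutic task, as described below.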

The analysis also focused on identifying whether the algorithms exhibited biases in learning personalization, which could impact educational equity. Each tool’s adaptability to different learning styles was assessed, considering the diversity of students in the classroom. This was crucial to understand whether the platforms adhered to the principle of pedagogical justice and whether their use equally contributed to the development of competencies in all students.

Finally, the data obtained were interpreted through a qualitative analysis based on hermeneutic techniques, contextualizing the results within a theoretical and ethical framework. Each finding was evaluated against the reviewed literature, enabling a critical interpretation of the impact of AI tools in legal education. Grounded theory was employed to identify emerging patterns and key themes in the results, building a theoretical structure that provided a reflective analysis of the benefits and challenges of AI in law teaching.

3. RESULTS

3.1 Brief characterization of AI tools adaptable to the academic legal context

Perplexity AI is a free, AI-assisted search tool that allows users to ask complex questions and receive detailed answers with references to reliable sources (Daungsupawong & Wiwanitkit, 2024). In the legal field, it can be useful for students and professors researching case law, legal articles, and specific doctrines, providing a personalized learning experience.

You.com is an AI-powered search engine offering a free interactive chat assistant (Tisman & Seetharam, 2023). Law students and scholars can use this tool to obtain answers to legal questions, research case law, and receive writing assistance, adapting to the user’s needs.

Google Bard, Google’s free conversational AI, enables users to make complex inquiries and receive structured responses (Daraqel et al., 2024). It can support legal education by answering questions about legal concepts, offering examples and references, and facilitating access to up-to-date legal information.

3.2 Personalization of learning and accessibility

One of the most notable findings from the content analysis of artificial intelligence tools applied to law teaching was the capacity for personalized learning they offer. The tools analyzed use algorithms that adjust content and teaching pace according to each student’s progress and needs. This personalization capability allows students to receive an education more tailored to their skills and knowledge, promoting a more inclusive and efficient learning experience.

Personalized learning in the legal context represents a significant advancement in legal pedagogy, as it addresses the heterogeneity in students’ preparation levels and learning styles. However, this adaptability presents certain ethical and pedagogical risks. The algorithms’ ability to personalize learning heavily depends on the quality and breadth of the data with which they have been trained (Shoaib et al., 2024). This creates a risk of algorithmic bias, where students with characteristics differing from those in the dataset may receive less effective learning experiences. Additionally, although AI can enhance educational efficiency, there is concern that individualized learning might reduce the collective and collaborative dimension essential in lawyer training, by minimizing opportunities for group discussion and shared learning (Lokare & Jadhav, 2024).
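To make the preceding discussion more concrete, the following hypothetical sketch illustrates the general kind of rule an adaptive platform might apply when adjusting exercise difficulty to a student's recent performance. The thresholds, the item pool, and the next_item function are invented for illustration and are not drawn from any of the tools analyzed.

# Hypothetical sketch of an adaptive pacing rule: the next exercise tier is chosen
# from the mean of a student's recent scores. Thresholds, items, and scoring are
# invented for illustration only.
from statistics import mean

ITEM_POOL = {
    "basic":        ["Define 'ratio decidendi'.", "Identify the parties in a contract dispute."],
    "intermediate": ["Distinguish two conflicting precedents.", "Draft a short issue statement."],
    "advanced":     ["Construct a counterargument to a leading case.", "Evaluate a statute's constitutionality."],
}

def next_item(recent_scores: list[float]) -> str:
    """Choose the next exercise tier from the mean of recent scores (0-1 scale)."""
    avg = mean(recent_scores) if recent_scores else 0.5
    if avg < 0.5:
        tier = "basic"
    elif avg < 0.8:
        tier = "intermediate"
    else:
        tier = "advanced"
    return ITEM_POOL[tier][0]

print(next_item([0.9, 0.85, 0.8]))   # an advanced exercise is selected
print(next_item([0.4, 0.3]))         # a basic exercise is selected

Even this toy rule makes the dependence on data visible: whatever the recorded scores fail to capture, including collaborative and discussion-based work, the adaptation cannot take into account.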

3.3 Automation of feedback and assessment

Another key finding was the automation of feedback and assessment. The AI tools analyzed could provide instant feedback on students’ responses, using automated systems to assess knowledge, analyze answers, and correct common errors in legal reasoning. This enabled continuous, real-time evaluation that helped students identify their strengths and areas for improvement promptly.

While instant feedback offers considerable advantages, the use of AI for automatic assessment in the field of law presents significant limitations. Legal education not only involves learning rules and procedures but also the development of critical argumentation skills, ethics, and the contextualization of specific cases (Fest et al., 2022), which are challenging to capture and assess through algorithms. Automated assessment, while useful for technical or regulatory knowledge, may be insufficient for evaluating the quality of arguments or the understanding of complex ethical and social principles underlying the law (Battelli, 2020). Additionally, reliance on automated feedback may lead students to depend excessively on these systems, potentially reducing their capacity to develop autonomous critical judgment, an essential aspect in the training of future lawyers (Zhai et al., 2024).
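The limitation described above can be made concrete with a hypothetical sketch of rubric-style automated feedback. The rubric items, regular expressions, and sample answer are invented for illustration; they show how such a system can verify surface features of a legal answer while remaining blind to the quality of the argument.

# Hypothetical sketch of rubric-style automated feedback on a short legal answer.
# It flags surface features (whether a rule is cited, applied, and concluded on),
# but it cannot judge the quality of the reasoning, which is the limitation
# discussed above. Rubric items and the sample answer are invented for illustration.
import re

RUBRIC = {
    "cites_a_rule":      r"\b(article|section|act|statute|case)\b",
    "applies_to_facts":  r"\b(here|in this case|on these facts)\b",
    "states_conclusion": r"\b(therefore|accordingly|in conclusion)\b",
}

def feedback(answer: str) -> dict[str, bool]:
    """Return which rubric items the answer appears to satisfy."""
    text = answer.lower()
    return {item: bool(re.search(pattern, text)) for item, pattern in RUBRIC.items()}

sample = ("Under section 12 of the act, consent must be informed. "
          "Here, the client was never told of the risk; therefore the consent is invalid.")
print(feedback(sample))  # all three surface checks pass, yet nothing is said about argument quality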

3.4 Usability and accessibility of tools

The analysis of AI tool interfaces revealed that, overall, the platforms were intuitive and easy to navigate, facilitating use by both students and professors. However, some accessibility barriers were identified, particularly for students with visual or hearing disabilities or those with limited access to high-quality technological devices. Although the tools studied included certain accessibility features, such as automatic captions and contrast adjustment options, their implementation was not always optimal for all users.

Accessibility is a fundamental pillar of inclusive education and should be a priority in any educational AI tool (Summers et al., 2024). While AI platforms offer intuitive usability that facilitates access to information (Yue Yim, 2024), their implementation still faces challenges in ensuring equity in access to legal education. The lack of complete accessibility not only limits the learning opportunities for students with disabilities but also contradicts the principles of justice and equity that law promotes. It is essential for AI tools to include comprehensive accessibility features to ensure that all students can equally benefit from their pedagogical advantages. Moreover, reliance on high-performance technological devices presents an additional barrier for students from diverse socioeconomic backgrounds, potentially increasing inequalities in access to quality legal education (Lavalle, 2020).

3.5 Algorithmic bias and educational equity

Another significant finding was the presence of algorithmic biases in AI tools. These biases were identified in the way algorithms interpreted responses and in the learning recommendations they provided. The biases stemmed from the datasets used to train the tools, which did not always reflect the diversity of students in terms of skills, cultural background, or socioeconomic context. This could result in a less effective learning experience for certain student groups.

The identification of algorithmic biases in AI tools raises a critical concern about equity and justice in legal education. The presence of biases in algorithms can reinforce existing inequalities and reduce opportunities for effective learning for students from diverse backgrounds (Suresh, 2023). In the field of law, where equity is a core value, any bias in education could have significant repercussions on the training of professionals. To mitigate these biases, it is essential for AI developers to use more inclusive and representative datasets and to implement regular audits to detect and correct potential biases in algorithms. Additionally, the use of AI in education should be accompanied by a pedagogical approach that acknowledges the limitations of algorithms and offsets any lack of equity with additional support (Lee et al., 2024).
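As an illustration of what such a regular audit might look like in its simplest form, the following hypothetical sketch compares how often a tool's recommendations are judged helpful across two student groups. The group labels, outcome data, and the 0.2 threshold are invented; a real audit would require far richer data and expert review.

# Hypothetical sketch of a simple disparity audit: compare how often the tool's
# recommendations are judged helpful across student groups. Group labels, numbers,
# and the threshold are invented for illustration only.
def helpful_rate(outcomes: list[bool]) -> float:
    return sum(outcomes) / len(outcomes)

def disparity(groups: dict[str, list[bool]]) -> float:
    """Gap between the best- and worst-served group (0 = parity)."""
    rates = {g: helpful_rate(o) for g, o in groups.items()}
    return max(rates.values()) - min(rates.values())

audit = {
    "native_language_matches_tool": [True, True, True, False, True],
    "other_first_language":         [True, False, False, True, False],
}
gap = disparity(audit)
print(f"helpfulness gap: {gap:.2f}")
if gap > 0.2:   # illustrative threshold
    print("Flag for review: recommendations may serve some groups less well.")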

3.6 Impact on the teaching role and legal pedagogy

Finally, the analysis revealed that AI usage has a profound impact on the teacher's role and on legal pedagogy in general. Law professors who use AI tools can focus on higher-value tasks, such as individualized mentoring and developing students’ practical skills, as AI takes on repetitive tasks or assessment duties. However, the implementation of AI also shifts classroom dynamics, as students may become overly reliant on technology, reducing their direct interaction with professors.

The transformation of the teaching role presents both opportunities and challenges in legal education. AI allows educators to concentrate their time and effort on areas where their expertise is irreplaceable, such as developing critical thinking and ethical skills in students (Walter, 2024). However, technological reliance could diminish the human and ethical dimension of education, which is crucial for the comprehensive training of lawyers (Zhao et al., 2024). Direct interaction with instructors enables students to understand not only the technical framework of the law but also its social, ethical, and cultural dimensions. Therefore, the integration of AI in legal education must strike a balance, allowing educators to utilize technology without it replacing the personal interaction and ethical guidance they provide.

4. CONCLUSIONS

This study demonstrated the transformative potential of artificial intelligence in legal education, highlighting how its use can personalize learning, improve feedback efficiency, and redefine the role of instructors in the university setting. However, the results also underscore significant ethical and pedagogical challenges that must be addressed for this implementation to be truly inclusive and equitable. Personalization, though beneficial, poses risks of algorithmic bias that could affect the quality of education for certain student groups, limiting equity in access to legal training. Additionally, while automated feedback and assessment are valuable for technical learning, they fall short in developing critical and ethical skills essential in the legal field.

Furthermore, the lack of full accessibility in some tools and the potential shift in teacher-student dynamics highlight the need to implement AI in a way that respects human interaction and maintains the ethical dimension of legal education. AI should be complemented by active oversight from instructors, who, as formative guides, play an irreplaceable role in teaching values and practical skills. Thus, the study concludes that to maximize AI's potential in legal education, its application must be accompanied by a critical and regulated approach that ensures justice and pedagogical quality for the benefit of all students.

5. REFERENCES

Alexander, J., McConnell, S., Mitchell, R., & McGrane, A. (2024). Technological challenges for modern law school pedagogy: Preparing graduates for the modern legal workplace. The Law Teacher, 58(1), 32-57. https://doi.org/10.1080/03069400.2023.2287393

An, Q., Yang, J., Xu, X., Zhang, Y., & Zhang, H. (2024). Decoding AI ethics from Users’ lens in education: A systematic review. Heliyon, 10(20), e39357. https://doi.org/10.1016/j.heliyon.2024.e39357

Battelli, E. (2020). La decisión robótica: Algoritmos, interpretación y justicia predictiva. Revista de Derecho Privado, 40, 45-86. https://doi.org/10.18601/01234366.n40.03

Cantatore, F. (2019). New Frontiers in Clinical Legal Education: Harnessing Technology to Prepare Students for Practice and Facilitate Access to Justice. Australian Journal of Clinical Education, 5(1). https://doi.org/10.53300/001c.11191

Daraqel, B., Wafaie, K., Mohammed, H., Cao, L., Mheissen, S., Liu, Y., & Zheng, L. (2024). The performance of artificial intelligence models in generating responses to general orthodontic questions: ChatGPT vs Google Bard. American Journal of Orthodontics and Dentofacial Orthopedics, 165(6), 652-662. https://doi.org/10.1016/j.ajodo.2024.01.012

Daungsupawong, H., & Wiwanitkit, V. (2024). Assessing ChatGPT and perplexity AI performance. Digestive and Liver Disease, 56(9), 1638. https://doi.org/10.1016/j.dld.2024.04.001

Doğan, E., Şahin, F., Şahin, Y. L., Kobak, K., & Okur, M. R. (2024). Enhancing clinical law education through immersive virtual reality: A flow experience perspective. Learning and Instruction, 94, 101989. https://doi.org/10.1016/j.learninstruc.2024.101989

Fest, I., Wieringa, M., & Wagner, B. (2022). Paper vs. practice: How legal and ethical frameworks influence public sector data professionals in the Netherlands. Patterns, 3(10), 100604. https://doi.org/10.1016/j.patter.2022.100604

Fu, Y., & Weng, Z. (2024). Navigating the ethical terrain of AI in education: A systematic review on framing responsible human-centered AI practices. Computers and Education: Artificial Intelligence, 7, 100306. https://doi.org/10.1016/j.caeai.2024.100306

Grimes, R. (2020). Making and Managing Change in Legal Education: Yesterday, Today and Tomorrow. Asian Journal of Legal Education, 7(2), 178-194. https://doi.org/10.1177/2322005820919258

Hashmi, N., & Bal, A. S. (2024). Generative AI in higher education and beyond. Business Horizons, 67(5), 607-614. https://doi.org/10.1016/j.bushor.2024.05.005

Kajiwara, Y., & Kawabata, K. (2024). AI literacy for ethical use of chatbot: Will students accept AI ethics? Computers and Education: Artificial Intelligence, 6, 100251. https://doi.org/10.1016/j.caeai.2024.100251

Lavalle, M. (2020). Acceso a la educación y brecha digital en tiempos de pandemia. Revista Jurídica de la Universidad de San Andrés, 10, 27-56.

Lee, J., Hicke, Y., Yu, R., Brooks, C., & Kizilcec, R. F. (2024). The life cycle of large language models in education: A framework for understanding sources of bias. British Journal of Educational Technology, 55(5), 1982-2002. https://doi.org/10.1111/bjet.13505

Lokare, V. T., & Jadhav, P. M. (2024). An AI-based learning style prediction model for personalized and effective learning. Thinking Skills and Creativity, 51, 101421. https://doi.org/10.1016/j.tsc.2023.101421

Onwuachi-Willig, A. (2023). New Frontiers in Legal Education. Boston University School of Law. https://www.bu.edu/law/record/articles/2023/legal-education-and-artificial-intelligence/

Pahi, K., Hawlader, S., Hicks, E., Zaman, A., & Phan, V. (2024). Enhancing active learning through collaboration between human teachers and generative AI. Computers and Education Open, 6, 100183. https://doi.org/10.1016/j.caeo.2024.100183

Parker, L., Carter, C., Karakas, A., Loper, A. J., & Sokkar, A. (2024). Graduate instructors navigating the AI frontier: The role of ChatGPT in higher education. Computers and Education Open, 6, 100166. https://doi.org/10.1016/j.caeo.2024.100166

Sexton, M. (2022). Convenience sampling and student workers: Ethical and methodological considerations for academic libraries. The Journal of Academic Librarianship, 48(4), 102539. https://doi.org/10.1016/j.acalib.2022.102539

Shoaib, M., Sayed, N., Singh, J., Shafi, J., Khan, S., & Ali, F. (2024). AI student success predictor: Enhancing personalized learning in campus management systems. Computers in Human Behavior, 158, 108301. https://doi.org/10.1016/j.chb.2024.108301

Stöhr, C., Ou, A. W., & Malmström, H. (2024). Perceptions and usage of AI chatbots among students in higher education across genders, academic levels and fields of study. Computers and Education: Artificial Intelligence, 7, 100259. https://doi.org/10.1016/j.caeai.2024.100259

Summers, A., Haddad, M. E., Prichard, R., Clarke, K.-A., Lee, J., & Oprescu, F. (2024). Navigating challenges and opportunities: Nursing students’ views on generative AI in higher education. Nurse Education in Practice, 79, 104062. https://doi.org/10.1016/j.nepr.2024.104062

Suresh, V. (2023). How can we manage biases in artificial intelligence systems – A systematic literature review. International Journal of Information Management Data Insights, 3(1), 100165. https://doi.org/10.1016/j.jjimei.2023.100165

Tisman, G., & Seetharam, R. (2023). OpenAi’s ChatGPT-4, BARD and YOU.com (AI) and the Cancer Patient, for Now, Caveat Emptor, but Stay Tuned. Digital Medicine and Healthcare Technology, 2. https://doi.org/10.5772/dmht.19

Tzirides, A. O., Zapata, G., Kastania, N. P., Saini, A. K., Castro, V., Ismael, S. A., You, Y., Santos, T. A. D., Searsmith, D., O’Brien, C., Cope, B., & Kalantzis, M. (2024). Combining human and artificial intelligence for enhanced AI literacy in higher education. Computers and Education Open, 6, 100184. https://doi.org/10.1016/j.caeo.2024.100184

Vetter, M. A., Lucia, B., Jiang, J., & Othman, M. (2024). Towards a framework for local interrogation of AI ethics: A case study on text generators, academic integrity, and composing with ChatGPT. Computers and Composition, 71, 102831. https://doi.org/10.1016/j.compcom.2024.102831

Walter, Y. (2024). Embracing the future of Artificial Intelligence in the classroom: The relevance of AI literacy, prompt engineering, and critical thinking in modern education. International Journal of Educational Technology in Higher Education, 21(1), 15. https://doi.org/10.1186/s41239-024-00448-3

Yue Yim, I. H. (2024). A critical review of teaching and learning artificial intelligence (AI) literacy: Developing an intelligence-based AI literacy framework for primary school education. Computers and Education: Artificial Intelligence, 7, 100319. https://doi.org/10.1016/j.caeai.2024.100319

Zhai, C., Wibowo, S., & Li, L. D. (2024). The effects of over-reliance on AI dialogue systems on students’ cognitive abilities: A systematic review. Smart Learning Environments, 11(1), 28. https://doi.org/10.1186/s40561-024-00316-7

Zhao, Y., Zhang, L., & Hu, L. (2024). The Challenges and Responses Faced by Digital Legal Education in the Era of Big Data. Journal of Education and Educational Research, 9(1), 27-30. https://doi.org/10.54097/mj7m9436

Zickar, M. J., & Keith, M. G. (2023). Innovations in Sampling: Improving the Appropriateness and Quality of Samples in Organizational Research. Annual Review of Organizational Psychology and Organizational Behavior, 10(1), 315-337. https://doi.org/10.1146/annurev-orgpsych-120920-052946