In a decisive move with far-reaching implications, the Gujarat High Court has prohibited judges and court staff from using artificial intelligence in judicial decision-making, including the drafting of orders and judgments. By holding them personally accountable for any AI-assisted outputs, the Court has made a strong assertion of human accountability in the justice system.
The newly notified policy applies across the High Court and subordinate judiciary, extending to judicial officers, legal assistants, interns, and even para-legal volunteers. It disallows AI from influencing findings of fact or law, evaluating evidence, or shaping judicial outcomes, even if such outputs are later reviewed by a judge. While acknowledging the growing role of technology, the Court has drawn a clear boundary: AI may assist in limited preparatory tasks such as legal research, drafting refinement, or administrative automation, but only under strict human supervision and independent verification from authoritative legal sources. The policy also imposes stringent restrictions on feeding sensitive or confidential case data into public AI tools, citing risks of data misuse and systemic bias.
The Court emphasised that AI must not dilute the essence of judicial reasoning, explicitly stating that it “shall never be employed for any form of decision-making… or any substantive adjudicatory process.” It further made accountability non-negotiable, clarifying that once an AI-assisted output is signed, “the use of AI does not constitute a defence to a finding of error, misconduct, or professional negligence.” Violations, the policy warns, will invite disciplinary action alongside potential civil or criminal consequences under existing laws.