A lawyer in Canada is under investigation for allegedly using ChatGPT to prepare legal submissions in a child custody case before the British Columbia Supreme Court.

Court documents showed that Vancouver lawyer Chong Ke represented a father who wanted to travel abroad with his children but was locked in a separation dispute with the children's mother.

Ke Faces Law Society Probe


According to The Guardian, Ke allegedly asked ChatGPT, the chatbot developed by OpenAI, to provide examples of previous case law that might be relevant to her client's situation. She submitted two of the three cases it returned to the court.

However, despite multiple inquiries, the lawyers for the children's mother could not find any record of the cases. Ke backtracked when confronted with the discrepancies.

Ke wrote in an email to the court that she had no idea the two cases could be erroneous. She explained that after a colleague pointed out that they could not be located, she did her own research and still could not detect the problem.

She added that she had no intention to mislead the opposing counsel or the court, and she sincerely apologized.

Despite their popularity, chatbots, which are trained on vast amounts of data, are prone to generating fabricated information, errors known as "hallucinations." The mother's lawyers called Ke's actions reprehensible and deserving of rebuke, since they were forced to spend considerable time and expense determining whether the cases she cited were real.

The judge overseeing the case denied their request for special costs, stating that such an "extraordinary step" would necessitate a finding of reprehensible conduct or an abuse of process by the lawyer.

Justice David Masuhara wrote that citing fake cases in court filings and other materials handed up to the court is an abuse of process and tantamount to making a false statement to the court. He added that, left unchecked, such conduct can lead to a miscarriage of justice.

In his assessment, the opposing legal team was well resourced and had already prepared volumes of material for the case. He said there was no chance that the two fake cases would have slipped through.


Masuhara Says Ke Has 'No Intention To Deceive'

According to CBC News, Masuhara said that Ke was naive about the risks of using ChatGPT and that her actions generated "significant negative publicity." However, he also found that she took steps to correct her errors.

He said he did not find that she had intended to deceive or mislead, and he accepted the sincerity of Ke's apology to counsel and the court. He noted that her regret was clearly evident during her appearance and oral submissions in court.

Despite Masuhara's decision not to award special costs, Ke's conduct is now under investigation by the Law Society of British Columbia.

Christine Tam, a spokesperson for the Law Society, said it acknowledges the potential benefits of using AI in the delivery of legal services. However, she added, the Law Society has published guidance on the appropriate use of AI and expects lawyers who use it to assist a client to adhere to the standards of conduct expected of a competent lawyer.
