AI in Family Law: opportunities and risks
By Liam McCarthy & Christopher Ragozzino, Senior Associates
Artificial Intelligence (“AI”) is increasingly utilised in family law practice. It can give parties guidance about the requirements of the Family Law Act 1975 (Cth) (“the Act”) to help navigate a difficult separation. Practitioners can also utilise AI to onboard clients, undertake research and prepare initial drafts of documents. AI, however, carries risks, particularly when content created by AI is inaccurate or fabricated. These errors are often referred to as hallucinations.
Understanding AI
AI works by using algorithms to recognise patterns and make predictions without being explicitly programmed for each task. AI models are trained on large amounts of information, and how accurately they can make these predictions depends on the amount and type of material they were trained on – and when that training occurred.
In essence, generative AI is guessing at what word (or part of a word) comes next.
The benefits
Communication
Currently, AI excels at communication. This offers a unique opportunity for separated parents who are struggling to communicate with each other to improve their communications. Parenting is hard within intact relationships, and much harder when parents have separated and are no longer on the best of terms. How well those parents can communicate has a significant impact on how well they can co-parent and, for those in litigation, on how the court perceives their ability to co-parent.
AI can reformulate terse communications into brief, informative, friendly and firm messages (also known as the “BIFF communication method”). While there are some concerns about what you may upload to AI (see below), the ability to communicate in a child-focused manner will assist many parents in their litigation (and, more importantly, improve their parenting dynamic).
Drafting
Subject to some comments below, AI can produce helpful parenting and property orders or parenting plans. These can help people draft their agreement or contemplate what type of outcome they might be happy with.
For instance, ChatGPT will offer possible parenting arrangements for any number of children (we tested up to six). AI will also offer ideas as to outcomes in property cases. This generic information can help people consider what outcome they may want, but it should not replace advice on what a person may be entitled to.
The risks
AI will generally guess at an answer rather than tell you it does not know. This can make it hard to spot hallucinations, particularly where AI is providing lengthy responses.
Users of AI need to understand what the AI is trained on. When the law changes – as it has in the past two years – the material the AI was trained on becomes dated. As an example, asking ChatGPT how to argue against another parent trying to re-open a parenting matter will result in being told to refer to the case of Rice & Asplund (1979). While the name of the case is correct, the citation is not, and the principle is now incorporated in section 65DAAA of the Act. When ChatGPT was told its argument “won” and asked about a costs order, we were told to look at sections 60B, 65D, 68F and 117 of the Act and rule 15.03 of the Family Law Rules 2004. Most lawyers would not refer to those sections in a costs application, particularly as section 117 of the Act has been repealed and the 2004 rules were replaced in 2021.
Relying on AI uncritically, then, can result in embarrassment, cost and professional sanction – something too many lawyers have been experiencing:
- Dayal [2024] FedCFamC2F 1166 was a case involving a lawyer submitting a list of authorities to the Court that did not exist. The judgment reinforces that legal practitioners remain personally responsible for the accuracy of documents they file with the Court. In this case, the lawyer’s practising certificate was varied so that he is no longer entitled to practise as a principal lawyer or handle trust money, and he was required to undertake supervised legal practice for two years.
- In Murray v State of Victoria [2025] FCA 731, a law firm was ordered to pay the indemnity costs of the other party after it filed documents containing fabricated citations generated by AI. The Judge cited the “growing problem” of false citations in documents prepared using generative AI tools.
- In Helmond & Mariya (No 2) [2025] FedCFamC1A 163, a lawyer used AI to prepare written documents that cited cases which did not exist. The Full Court of the Federal Circuit and Family Court of Australia was clear that this conduct breaches a party’s obligation not to mislead the Court, noting the high ethical obligations on lawyers. Citing the President of the King’s Bench Division of the High Court of Justice in the United Kingdom, the Full Court endorsed the following caution:
“Artificial intelligence is a tool that carries with it risks as well as opportunities. Its use must take place therefore with an appropriate degree of oversight, and within a regulatory framework that ensures compliance with well-established professional and ethical standards if public confidence in the administration of justice is to be maintained.”
- In Mertz & Mertz (No 3) [2025] FedCFamC1A 222, a paralegal used AI to prepare a summary of argument and a list of authorities. The authorities, though, did not exist. This led to the paralegal being dismissed, and to the lawyers who had not verified the work being referred to the local regulator and ordered to meet the other party’s costs.
The Victorian Legal Services Board and Commissioner has also provided guidance, as follows:
- Verify all AI-generated content against authoritative sources. If AI cites a case, it is your responsibility to confirm the case exists and, importantly, that it supports the argument or issue you are running;
- Avoid entering confidential client information into public AI tools. A privacy breach may not only waive legal professional privilege – which can be disastrous for a matter – but may also expose practitioners to disciplinary action;
- Follow the Supreme Court of Victoria and County Court of Victoria guidelines on responsible AI use in litigation. These set out principles to be applied by litigants, including practitioners and parties, when using AI tools, and emphasise that it is the duty of a party, and of a lawyer, not to mislead the Court or another party.
Caution must be exercised by practitioners, self-represented litigants and users of the family law system alike when dealing with AI. Those involved in the family law system should use AI for preliminary guidance only, and not follow it blindly. Kennedy Partners is committed to harnessing the power of AI to deliver a more efficient service for our clients.