A topic too relevant to ignore: generative artificial intelligence has certainly proven to be a useful tool in many fields, but like every new technology it brings risks and dangers to watch out for, especially in the legal field.
How many of us have never used ChatGPT, if only out of curiosity? Its success over the last few months suggests a clear answer. And while the quality of the information the tool provides, and its ability to intuit what users are asking for, are nothing short of amazing, it is equally true that these systems are far from perfect.
It is enough to query the OpenAI chatbot on a topic we know deeply to notice the inaccuracies and misleading information this tool can provide.
But the risks for users do not end there, as Gene Quinn (a patent attorney) rightly points out in his contribution to IPWatchdog.
The issue concerns the privacy of the data users enter into the AI system. Will my information be used to train the language model? Or shared with other users whose prompts touch on the topic I am dealing with?
Most likely, yes. At the very least, probably yes if the system in question is ChatGPT itself.
This is because, as OpenAI itself states, the data may be used to improve the chatbot's responses and increase the quality of the model (*). Using these artificial intelligence tools in the legal field may therefore lead to violations of professional secrecy.
Artificial intelligence and the legal field
Given these premises, it is hardly surprising that in recent weeks two technology giants, Samsung and Apple, have restricted their employees' use of these AI systems, worried about the possible leak of sensitive data and the advantages that could be unintentionally handed to competitors.
The secrecy of confidential information is certainly a fundamental issue for technology companies, but not only for them. Law firms and legal representatives are, in fact, obliged to guarantee the confidentiality of the information their clients provide in the course of their work.
The use of generative artificial intelligence tools could easily speed up the drafting and preparation of the necessary documentation, especially considering the quality these systems achieve once the user directs them toward a specific field of action, limited to producing text from the prompts provided.
As anticipated, however, this information could then be reused by other users for the most disparate purposes, with potentially serious consequences for the represented party.
These are risks that competent law firms and lawyers will certainly be able to mitigate by basing their work on safe and vetted tools.
In conclusion, relying on experienced professionals who carry out their work with full awareness, including the careful use of any artificial intelligence systems, is the wisest and safest way to protect yourself.
* It is possible to ask OpenAI not to process your data. However, the company reserves the right to reject the request if it does not consider it valid.
Image by EKATERINA BOLOVTSOVA from Pexels