As artificial intelligence (AI), such as ChatGPT, continues to evolve and become more commonplace, many trade and professional associations are turning to AI technology to enhance their operations and decision-making processes and benefit their members. However, as with any emerging technology, the use of AI by associations raises a number of important legal issues that must be carefully considered and worked through.
Intellectual property is a key legal issue that associations must consider when using AI. AI systems can generate new works of authorship, such as software programs, artistic works, articles, and white papers; associations must ensure that they have the necessary rights and licenses to use and distribute these works, and that they are transparent about who, or what, created them. For instance, take steps to ensure that AI-generated content is not registered with the Copyright Office as the association's own unless it has been sufficiently modified to become a product of human creation and an original work of authorship of the association. Associations also must be mindful of any third-party intellectual property rights that may be implicated by their use of AI, such as copyrights or patents owned by AI vendors, developers, or others, and ensure that they do not infringe any third-party copyright, patent, or trademark rights. Finally, be careful not to permit the inputting into an AI system of any confidential or otherwise-protected content (such as trade secrets or information subject to a nondisclosure obligation or the attorney-client privilege), as such content may no longer be protected and confidential once entered.
Another legal issue to consider is discrimination. AI systems can inadvertently perpetuate bias and discrimination, particularly if they are trained on data that reflects historical biases or inequalities. Associations must ensure that their AI systems do not discriminate on the basis of race, ethnicity, national origin, gender, age, disability, or other legally protected characteristics, and must take steps to identify and address any biases that may be present in their algorithms. For instance, large employers increasingly use AI systems to help screen applicant résumés and even analyze recorded job interviews. If AI penalizes candidates because it cannot understand a person's accent or speech impediment, for instance, that could potentially lead to illegal employment discrimination. While this will rise to the level of a legal issue only in certain contexts (such as the workplace), the use of AI has the potential to create discriminatory effects in other association settings (such as membership and volunteer leadership) and needs to be carefully addressed.
Associations also must consider the potential tort liability issues that may arise from their use of AI. If an AI system produces inaccurate, negligent, or biased results that harm members or other end users, the association could potentially be held liable for any resulting damages. Associations must therefore ensure that their AI systems are reliable and accurate, and that all resulting work product (such as industry or professional standards set by an association) is carefully vetted for accuracy, veracity, completeness, and efficacy.
Associations need to ensure that they have appropriate insurance coverage in place to protect against potential liability claims in all of these areas of legal risk. Note that traditional nonprofit directors and officers (D&O) liability and commercial general liability insurance policies may be—and likely are—insufficient to fully protect associations in all of these areas. Associations also should explore acquiring an errors and omissions liability/media liability insurance policy to fill those coverage gaps.
* * * * *
In conclusion, while the use of AI by associations presents numerous opportunities and benefits, there are a number of legal issues that need to be carefully considered before going too far down the AI path. Among other things, associations must ensure that they are transparent with their members about the use of their data, obtain necessary intellectual property rights and licenses and avoid infringing others’ rights, address any potential biases in their algorithms, protect themselves against potential tort liability claims, and secure appropriate insurance coverage to protect against these risks.
As the work of associations involves both staff and member leaders, adopting and distributing appropriate policies governing AI usage by staff, officers, directors, and committee members is critical, as is policing compliance with such policies. Similar provisions should be built into employee handbooks and into contracts with staff, contractors, and members (including agreements with volunteer speakers, authors, and board and committee members).
With careful planning and attention to these issues, associations can use ever-developing AI technology to enhance their operations, programs, and activities, better serve their members, and further advance their missions.
For more information, contact Mr. Tenenbaum at [email protected].