Artificial Intelligence (AI) is undoubtedly revolutionizing the workplace. More and more employers rely on algorithms or automated tools to determine who gets interviewed, hired, promoted, compensated, disciplined, or terminated. When adequately designed and applied, AI can help employees find employment, match employers with valuable employees, and advance diversity, inclusion, and accessibility in the workplace; used properly, AI tools can also make employment processes faster and more efficient while helping to reduce both conscious and unconscious bias. Yet AI poses new risks of employment discrimination, especially when designed or used improperly, and it has become a focal point of targeted efforts by federal and state enforcement agencies and lawmakers. Employers must be smart, transparent, and knowledgeable about how they use AI in their workplaces.
EEOC’s Concerns with Artificial Intelligence – Recruitment and Hiring
The use of AI in the workplace has been on the radar of federal regulators, such as the U.S. Equal Employment Opportunity Commission (EEOC), for several years. The EEOC now aims to develop technical assistance, guidance, audit tools, and other parameters to ensure that AI is developed, understood, and used responsibly. AI is a “priority” subject in the EEOC’s 2023-2027 Strategic Enforcement Plan (SEP), which we recently summarized. The EEOC has signaled that, for the first time, it will take into account “employers’ increasing use of automated systems, including artificial intelligence or machine learning,” to make hiring and recruiting decisions. The EEOC plans to scrutinize employers’ “use of software that incorporates algorithmic decision-making or machine learning, including artificial intelligence; use of automated recruitment, selection, or production and performance management tools; or other existing or emerging technological tools used in employment decisions.” This is a warning shot: employers should be deliberate and cautious when adopting new technologies to assist with decision-making.
One of the EEOC’s priorities is eliminating barriers in recruitment and hiring. The use of AI, in the EEOC’s view, may result in discrimination in recruitment and hiring in three specific ways:
- First, the potential that AI is used unlawfully to “target job advertisements, recruit applicants, or make or assist in hiring decisions where such systems intentionally exclude or adversely impact protected groups.” The EEOC cited an example in which it believed a company specifically programmed its application software to automatically reject applicants over a certain age.
- Second, the potential that use of AI results in “restrictive application processes or systems, including online systems that are difficult for individuals with disabilities or other protected groups to access.” The EEOC previously issued guidance, “The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees” (dated May 12, 2022), which is designed to help private employers comply with the Americans with Disabilities Act (ADA) when using AI. The EEOC claims that ADA liability can arise in three cases: (i) the employer fails to provide a reasonable accommodation necessary for an individual to be rated fairly and accurately by the tool, (ii) the tool screens out an individual with a disability even though the individual can do the job with a reasonable accommodation, or (iii) the tool violates the ADA’s restrictions on disability-related inquiries and medical examinations.
- Third, the risk that AI tools “disproportionately impact workers based on their protected status.” Title VII of the Civil Rights Act of 1964, the Age Discrimination in Employment Act (ADEA), and the ADA have long proscribed selection procedures, such as pre-employment tests, interviews, and promotion tests, that disparately impact workers based on their protected status. The EEOC intends to prioritize its scrutiny of selection tools that use AI, applying both its long-standing uniform hiring and selection guidelines and more modern criteria that may still need to be formulated; a simplified illustration of the underlying disparate-impact math follows this list.
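For context on that third risk, the EEOC’s long-standing guidelines articulate a common rule of thumb for disparate impact known as the “four-fifths rule”: adverse impact is generally indicated when one group’s selection rate is less than 80% of the rate for the most-selected group. The Python sketch below is a minimal illustration using hypothetical applicant and hire counts; it is not drawn from any actual matter, and it is not a substitute for a full validation study.

```python
# Minimal sketch of the "four-fifths rule" from the EEOC's Uniform
# Guidelines on Employee Selection Procedures (29 C.F.R. 1607.4(D)).
# All applicant/hire counts below are hypothetical.

selections = {
    # group: (applicants, hired)
    "group_a": (200, 60),  # selection rate 30%
    "group_b": (150, 30),  # selection rate 20%
}

# Selection rate = hired / applicants for each group.
rates = {g: hired / applied for g, (applied, hired) in selections.items()}

highest = max(rates.values())
for group, rate in rates.items():
    impact_ratio = rate / highest
    # A ratio below 0.8 (four-fifths) is generally regarded as
    # evidence of adverse impact under the Guidelines.
    flag = "potential adverse impact" if impact_ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.0%}, impact ratio={impact_ratio:.2f} ({flag})")
```

Here, group_b’s 20% selection rate is only two-thirds of group_a’s 30% rate, well under the four-fifths threshold, so the tool’s results would warrant closer review.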
What’s Next?
The EEOC conducted a public hearing on January 31, 2023, entitled “Navigating Employment Discrimination in AI and Automated Systems: A New Civil Rights Frontier.” The hearing is part of the EEOC’s ongoing “Artificial Intelligence and Algorithmic Fairness Initiative,” through which the Commission reportedly seeks to ensure that technological resources are used in a manner that furthers accessibility, diversity, equity, and inclusion. The EEOC ultimately seeks to “guide employers, employees, job applicants, and vendors to ensure that these [AI] technologies are used fairly and consistently with federal equal employment opportunity laws.” According to EEOC Chair Charlotte A. Burrows, the Commission is still evaluating exactly how to do that; in the meantime, it intends to gather additional information, educate stakeholders about the use of AI tools, and combat algorithmic discrimination where it finds it.
The EEOC is not the only federal agency with workplace technology concerns on the horizon. We previously alerted readers to the NLRB General Counsel’s plan to crack down on electronic monitoring in the workplace, based on concerns that such monitoring infringes employees’ rights to engage in protected concerted activity. If successful, the General Counsel’s new framework could slow the growth of “smart” workplaces in the United States.
Many states have already tackled surveillance and other technology-driven privacy concerns in the workplace, and we expect that trend to continue. Some state and local governments have also implemented or proposed robust laws and rules targeting automated employment decision tools, which could shift the legal landscape even further. As a result, employers should stay aware of state and local laws governing the use of AI, electronic monitoring, and other workplace technologies.
Employer Best Practices for Using AI
Ultimately, the legal issues and potential liability associated with the use of AI in employment decisions will continue to emerge as the technologies mature. Even while the legal implications remain somewhat unsettled, there are a number of best practices employers can follow to manage the risks of AI tools:
(1) Know Your Data. Exercise caution when developing, applying, and modifying the data used to train and operate AI for employment decision-making: incomplete or erroneous data will skew what the tool learns and the decisions it produces. Ask vendors about the technology being used and make sure you understand the algorithms and mechanics behind the automated processes.
(2) Disclose the Topics and Methodology. Be transparent with applicants and employees about what the AI evaluates and how it is used; this fosters trust and credibility and, in turn, a greater appreciation of the merits of AI systems.
(3) Consider Undergoing a Bias Audit. Monitor and audit AI uses and processes to proactively identify intentional misuse or potentially discriminatory outcomes; a hypothetical audit sketch follows this list.
(4) Implement Human Oversight. Identify the point at which humans must be involved in the employment decision-making process, and assign a team to oversee the processes and results of AI tools to ensure they serve their legitimate objectives and avoid discriminatory outcomes.
(5) Review Vendor Agreements. Carefully review agreements with vendors that provide automated decision systems to ensure the vendors attest to the fairness and integrity of their AI tools.
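On point (3), a bias audit can be as simple as periodically re-running the selection-rate math shown earlier against the tool’s actual decision log. The sketch below is purely illustrative: the file name, column names, and protected categories are assumptions, not a prescribed audit format.

```python
# Hypothetical bias-audit sketch: recompute impact ratios from an AI
# tool's decision log. The file name and column names ("gender",
# "race", "age_band", "selected") are illustrative assumptions.
import pandas as pd

log = pd.read_csv("ai_tool_decisions.csv")  # one row per applicant

for category in ["gender", "race", "age_band"]:
    # Selection rate per group: mean of the 0/1 "selected" column.
    rates = log.groupby(category)["selected"].mean()
    ratios = rates / rates.max()    # impact ratio vs. most-selected group
    flagged = ratios[ratios < 0.8]  # four-fifths threshold, as above
    if not flagged.empty:
        print(f"Review {category!r}: impact ratios below 0.8\n{flagged}\n")
```

Whatever form an audit takes, documenting the results, and any remediation steps, helps an employer demonstrate that the tool was actually monitored in practice.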
Take Note: This is just a glimpse of the interaction between artificial intelligence and anti-discrimination laws, not an exhaustive summary, and this topic is a moving target. Employers should work with their Akerman Labor & Employment Law attorneys to ensure their policies and practices are up to date.