Changes are coming at the federal, state and local levels.
November 30, 2023 | Lisa Nagele-Piazza, SHRM-SCP
As ChatGPT and other generative artificial intelligence tools make headlines and their use continues to grow, HR is feeling the impact.
AI technology can be used to streamline workplace processes, handle routine work and even help tackle some complicated tasks—but lawmakers are concerned that these innovative solutions may create privacy issues and lead to biased decision-making. Therefore, as HR professionals continue to explore new technologies to ease their workloads, they also must prepare for new regulations at the federal, state and local levels.
Already, New York City is rolling out the first workplace artificial intelligence law in the nation, and at press time, several other states as well as the federal government were considering AI legislation.
“It’s sort of a mad scramble at this point as we see a flurry of state activity,” says Benjamin Ebbink, an attorney with Fisher Phillips in Sacramento, Calif. “But it seems like the technology is moving faster than policymakers can keep up with.”
In the meantime, the U.S. Equal Employment Opportunity Commission (EEOC) and other federal agencies have issued guidance on how existing laws apply to new technology. But more regulations are sure to follow.
Here’s how HR professionals are adapting to changing technology and what they can expect as lawmakers begin to regulate this new frontier.
HR’s AI Use Is Evolving
Recent innovations involving artificial intelligence are reshaping how work is being done. While HR professionals have implemented various technologies over the years to simplify payroll, talent acquisition and other processes, recent advancements have allowed busy departments to automate more functions.
“The bulk of HR professionals are using AI tools for process optimization and automation,” explains Alex Alonso, Ph.D., SHRM-SCP, chief knowledge officer for SHRM. For instance, HR professionals are leveraging AI tools to screen candidate resumes and to automate benefits selection during open enrollment periods. They are also using chatbots to answer employees’ questions about benefits plan features and options.
“What we’re seeing now, though, is roughly 1 in 10 HR professionals sharing how they’re using generative AI to develop policies, job descriptions and development plans,” Alonso says. “In some cases, we’re also seeing HR professionals use generative AI to analyze key workplace trends around talent availability in their markets.”
What exactly is “generative” AI? This technology is trained on existing datasets and uses the patterns it learns to generate new content, including photos, documents, music and more. For example, OpenAI, the creator of ChatGPT, says its GPT-4 model “can follow complex instructions in natural language and solve difficult problems with accuracy.”
This type of technology, however, is still being tested and refined, and the results it generates should be checked for accuracy.
6 Best Practices for HR to Consider
Ebbink says HR leaders should consider taking the following steps when using AI in the workplace:
- Start with an AI inventory to identify how the company is currently using AI.
- Be aware of any potential employment discrimination issues.
- Consider performing impact assessments or bias audits.
- Ensure compliance with applicable data privacy regulations.
- Make sure a human is reviewing AI-generated content for accuracy and making final decisions.
- Train employees on how to use AI appropriately and recognize its limitations.
State and Local Trends
As HR teams embrace new AI technology, they also need to prepare for the inevitable regulations that will follow. Notably, employers are exploring ways to remove human bias by using AI to make data-driven, fair decisions—but such technology can make biased decisions, too, depending on what information is fed into it.
Lawmakers have taken notice. For example, HR professionals in New York City need to be familiar with the city’s groundbreaking new law regulating the use of automated employment decision tools (AEDTs) to hire and promote workers in the city. Employers in the Big Apple must take the following steps if they use such tools to screen workers:
- Select an independent auditor to perform an annual bias audit of the company’s AEDTs and assess disparate impact based on race, ethnicity and sex.
- Publish the audit results.
- Provide notice to job candidates and employees who reside in New York City about the use of AEDTs.
An initial bias audit must also be performed before using a new AEDT, and the results must be published 10 business days prior to using it.
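To make the idea of a bias audit concrete, the sketch below computes selection rates and impact ratios for a hypothetical screening tool. The data, the group names and the 0.8 threshold (the EEOC’s “four-fifths” rule of thumb for flagging possible disparate impact) are illustrative assumptions, not requirements quoted from the New York City law; an actual audit must follow the city’s published rules and be performed by an independent auditor.

```python
# Illustrative sketch of the kind of disparate-impact check a bias audit
# might perform. Group names, counts and the 0.8 threshold are assumptions
# for demonstration only.

def selection_rates(outcomes):
    """outcomes maps each category to (selected, screened); returns rates."""
    return {cat: sel / total for cat, (sel, total) in outcomes.items()}

def impact_ratios(rates):
    """Each category's selection rate divided by the highest rate."""
    best = max(rates.values())
    return {cat: rate / best for cat, rate in rates.items()}

# Hypothetical screening outcomes: (candidates selected, candidates screened)
outcomes = {
    "group_a": (48, 100),
    "group_b": (30, 100),
}

rates = selection_rates(outcomes)
ratios = impact_ratios(rates)
for cat, ratio in ratios.items():
    # Ratios below 0.8 are commonly treated as a signal for further review.
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{cat}: selection rate {rates[cat]:.2f}, impact ratio {ratio:.2f} ({flag})")
```

Here group_b’s impact ratio works out to 0.625, below the four-fifths benchmark, so it would be flagged for closer review; a flagged ratio is a prompt for investigation, not by itself proof of unlawful discrimination.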
Employers should be prepared to face similar requirements in other locations. For example, California’s AB 331 would have regulated AI use in the state, but the bill was put on hold for now.
Still, Ebbink notes, California’s bill may serve as a model for future legislation in the Golden State or other locations. Like New York City’s law, the bill was intended to require bias audits or impact assessments for employers that use AI.
“I anticipate that states are likely to enact legislation to require these impact assessments to ensure there is no disparate impact from the use of AI on protected categories of applicants and employees,” Ebbink says.
Notably, the California bill would have allowed people to opt out of AI use and request that a real person make decisions.
HR professionals should also watch for federal legislation. “There is lots of attention and talk about AI at the federal level—and even some legislative proposals,” Ebbink says. For now, he notes, employers may want to review the Biden administration’s Blueprint for an AI Bill of Rights, which identifies equal opportunities in employment as a key area to protect. In addition, the Biden administration has issued an executive order on AI.
“Algorithms used in hiring and credit decisions have been found to reflect and reproduce existing unwanted inequities or embed new harmful bias and discrimination,” according to the blueprint.
Guarding against any potential adverse impact discrimination seems to be a key focus of federal discussions, Ebbink says. “I think a key question will be whether we see comprehensive federal legislation or a patchwork of state regulation of AI,” he adds.
Don’t forget that existing rules still apply. “The EEOC has been the most active in this space, and employment discrimination laws are probably the first thing that comes to mind,” Ebbink says.
So, HR professionals should anticipate that federal agencies, including the EEOC, will view artificial intelligence through the same framework as their existing regulations.
“This will continue to be a hot topic for federal and state legislation and regulation,” Ebbink says, “so make sure you’re paying attention to what’s happening where you operate.”
Lisa Nagele-Piazza, SHRM-SCP, is legal content counsel for Fisher Phillips in Atlanta.