As artificial intelligence makes headlines through ChatGPT, the technology as a whole has been quietly pervading everyday life, screening job and rental apartment applications and, in some cases, helping determine medical care.

At the same time, a number of AI systems have been found to discriminate, tipping the scales for or against applicants based on characteristics such as race, gender, and income, while government oversight remains scarce or nonexistent.

An exclusive Associated Press report revealed that lawmakers in at least seven states were taking big legislative swings to regulate bias in AI, filling a void left by congressional inaction.

The proposals, from states including California, Colorado, Rhode Island, Illinois, Connecticut, Virginia, and Vermont, represent some of the first steps in a decades-long discussion over balancing the benefits of AI against its widely documented risks.

"AI does in fact affect every part of your life whether you know it or not," Brown University professor Suresh Venkatasubramanian said. "Now, you wouldn't care if they all worked fine. But they don't."

Venkatasubramanian was a co-author of the White House Blueprint for an AI Bill of Rights.

The AP reported that the legislation's success or failure would depend on lawmakers working through complex problems while negotiating with an industry worth hundreds of billions of dollars and growing at breakneck speed.

Last year, only about a dozen of the nearly 200 AI-related bills introduced in statehouses were passed into law, according to BSA, The Software Alliance, which advocates on behalf of software companies.

Those bills, along with the more than 400 AI-related bills being debated this year, were largely aimed at regulating specific aspects of AI, with nearly 200 of them targeting deepfakes, especially pornographic ones.

Other bills focus on curbing chatbots like ChatGPT, seeking to ensure they cannot create security risks, such as providing instructions for making improvised explosive devices.



States' Fight vs. AI Discrimination

Those bills, however, are separate from the seven state bills that would apply across industries to regulate discrimination by what the legislation calls "automated decision tools," addressing one of the technology's most pervasive and complex problems.

According to the Equal Employment Opportunity Commission, as many as 83% of employers use algorithms to help in hiring, with almost all Fortune 500 companies doing so. However, a Pew Research poll revealed that a majority of Americans were unaware that such tools had been used, let alone whether the systems were biased.

An AI system can learn bias from the data it is trained on, typically historical data that may carry a Trojan horse of past discrimination.

For its part, Amazon scrapped its hiring algorithm nearly a decade ago after it was found to favor male applicants. The AI was trained to assess new resumes by learning from past ones, which came largely from men. Although the algorithm never saw applicants' genders, it still downgraded resumes that contained the word "women's" or listed women's colleges, partly because those terms were underrepresented in the historical data it learned from.
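The mechanism is easy to reproduce in miniature. The toy sketch below (a hypothetical illustration, not Amazon's actual system; the resume keywords and outcomes are invented) trains a word-scoring model on historical hiring decisions that skewed male. The model never sees gender, yet it learns to penalize "women's" keywords simply because they were absent from past hires:

```python
from collections import Counter

# Hypothetical historical hiring data: resumes as keyword lists plus outcomes.
# Past hires skewed male, so keywords associated with women appear only in
# rejected resumes -- the "Trojan horse" of past discrimination.
history = [
    (["python", "captain", "chess club"], "hired"),
    (["java", "captain", "debate team"], "hired"),
    (["python", "women's chess club"], "rejected"),
    (["java", "women's debate team"], "rejected"),
]

def train(history):
    """Score each keyword by how often it co-occurred with a hire vs. a rejection."""
    hired, rejected = Counter(), Counter()
    for words, outcome in history:
        for w in words:
            (hired if outcome == "hired" else rejected)[w] += 1
    # Positive weight: historically favored keyword; negative: penalized.
    return {w: hired[w] - rejected[w] for w in set(hired) | set(rejected)}

def score(resume, weights):
    """Score a new resume by summing the learned keyword weights."""
    return sum(weights.get(w, 0) for w in resume)

weights = train(history)
# Gender is never an input, yet "women's ..." keywords now carry negative
# weight, so otherwise-identical resumes score lower when they appear.
```

With these weights, a resume listing "women's chess club" scores lower than an otherwise identical one listing "chess club", mirroring the proxy-discrimination pattern described above.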

"If you are letting the AI learn from decisions that existing managers have historically made, and if those decisions have historically favored some people and disfavored others, then that's what the technology will learn," said Christine Webber, the attorney in a class-action lawsuit alleging that an AI system scoring rental applicants discriminated against Black or Hispanic applicants.

The bills target the lack of transparency and accountability in the use of AI in employee recruitment, following California's failed bid last year, which some considered the first comprehensive attempt to regulate AI bias in the private sector. Some of the bills would also require companies to tell customers when AI will be used to make a decision and, with certain caveats, allow them to opt out.

BSA senior vice president of US government relations Craig Albright said the group's members were generally in favor of some of the proposed steps, such as impact assessments.

"The technology moves faster than the law, but there are actually benefits for the law catching up. Because then (companies) understand what their responsibilities are, consumers can have greater trust in the technology," he said.
