Photo credit: Mohamed Nohassi on Unsplash https://unsplash.com/photos/a-white-robot-with-blue-eyes-and-a-laptop--0xMiYQmk8g

NY Lawmakers Move to Ban AI from Giving Professional Advice

Lawmakers in New York have recently proposed Senate Bill S7263, which would ban AI chatbots from giving users legal or medical advice. The bill would apply to “AI chatbots that mimic or impersonate licensed professionals such as lawyers or physicians.” Essentially, the bill aims to protect people from potentially incorrect or harmful advice and to hold the companies running the chatbots accountable. Because so many companies have added chat boxes to their websites and apps, the bill would also require them to disclose whether a chatbot is managed by an AI system or by an actual person.

According to recent studies, over 27% of adults have used an AI chatbot at least once to get health-related information, and 17% use one multiple times a month. Setting aside the technology’s other issues, AI has a learning bias: most of the time, a model looks at a set of data, keeps the most frequently appearing information, and discards the outliers. That alone is not unusual; people also reach for the mean or median of a data set. The issue is that AI systems tend to learn from each other, which means the outliers present at the beginning soon disappear from the data set completely.

For example, pretend that you have 10 shirts, 9 blue and 1 red. The first AI system looks at that data and says, “There are more blue shirts than red, so blue must be the more important shirt.” When a second AI system looks at the first system’s output and sees that blue is the most important shirt, it may ignore the existence of the red shirt entirely. Now two AI systems “say” that blue shirts are important, so it must be true, since multiple sources agree on the same idea. It is easy to see how this compounds quickly. As the cycle continues, the information is distilled further and further, with each algorithm filtering for outliers that are no longer there. This confirmation bias means that information, especially information about minorities, is frequently discarded. That matters because over 36% of adults who used AI chatbots for medical advice trust that the information provided is reliable, and so are unlikely to seek a second opinion.
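
To make the dynamic concrete, here is a minimal Python sketch of that feedback loop. It is purely illustrative and does not model any specific chatbot: each “generation” stands in for a model trained on the previous model’s output, approximated here as random resampling, and the blue and red counts mirror the shirt example above.

```python
import random
from collections import Counter

def next_generation(data):
    """One 'generation': a new model trains on a sample of the
    previous model's output. Common values dominate the sample;
    a rare value can fail to be drawn at all, and once it is gone,
    no later generation can ever bring it back."""
    return [random.choice(data) for _ in data]

random.seed(0)  # make the run reproducible

# The toy data from the article: 9 blue shirts and 1 red shirt.
data = ["blue"] * 9 + ["red"]
print(f"generation 0: {dict(Counter(data))}")

for gen in range(1, 6):
    data = next_generation(data)
    print(f"generation {gen}: {dict(Counter(data))}")
```

Running the sketch shows the red shirt’s count drifting toward zero as the generations pass; once it hits zero, the data set is all blue forever, which is the permanent loss of outliers the shirt example describes.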

While the quality of AI chatbot responses has improved somewhat, much of the information they provide is still incorrect or misleading. By allowing affected users to sue the operators of these chatbots, the bill ensures that companies think twice about the information they provide. The bill has passed the New York Senate’s Internet and Technology Committee with unanimous support.
