Why get “interested” in Artificial Intelligence – by Deborah Levine

Originally published in the Chattanooga Times Free Press

It was super exciting to be invited to the annual conference of Project Voice, which takes place in Chattanooga, home to the country’s fastest internet since 2016. Project Voice looks at artificial intelligence, but not the traditional artificial intelligence (AI) that analyzes historical data and makes numeric predictions about the future. Rather, the focus is on conversational and generative AI, terms I hadn’t heard until sitting in the conference. Conversational AI can hold two-way interactions with humans by understanding and responding in text or speech. Generative AI can take prompts and create all kinds of content that never existed before yet is indistinguishable from human-generated work. I started to feel overwhelmed by this new information.

But then I learned that I’m actually a consumer of conversational AI, also called “natural language-based artificial intelligence,” whenever I try to return products on a website. And I learned that it was generative AI that I’d used to create my video stories for kids, Bunny Bear Adventures in Diversity Land. Go figure!

That hardly makes me an expert, so I asked the event’s emcee, Bradley Metrock, why he personally invited me. Bradley is a well-known conversational AI thought leader, CEO of Project Voice, and convener/designer of these annual conferences. The tall guy smiled down at me and said, “Because I saw you online and you’re interesting.” Right back at you, Bradley!

Bradley had already introduced Mayor Tim Kelly along with EPB’s J.Ed Marston. So I was determined to pay extra attention as we got underway with keynoter Merve Hickok. One of the world’s leading AI ethicists, Merve serves as President and Research Director of the Center for AI and Digital Policy in Washington DC. The Center’s goal is to ensure that the AI world promotes broad social inclusion based on fundamental rights, democratic institutions, and the rule of law.

I’m always delighted to hear from women who are leaders in the technology world. And I told her so during the break. Yes, I was an IT director back in the 1980s, but so much has changed since then. I could feel my brain trying to expand and digest current ethical issues. 

I heard how the SEC (U.S. Securities and Exchange Commission) has charged investment advisors with making false and misleading statements about AI. And that five new federal agencies have joined the Justice Department in pledging to enforce civil rights laws in AI. Europe has legislation on high-risk AI systems, listing products banned from sale in Europe. The United Nations has a resolution setting out requirements and principles. The goal is for AI technology to be safe and trustworthy.

Since public trust is iffy this election year, it’s vital to build guardrails. For example, campaign robocalls, including prerecorded voice calls, to cell phones, pagers, or other mobile devices are prohibited without prior consent. And did you know that Tennessee was the first state to criminalize AI impersonation of voices without permission? That legislation was influenced by singers in Nashville and named the ELVIS Act.

To build that trust, Merve recommended that large corporations pilot-test new AI technology rather than install it all at once. This allows corporations to track new applications and learn from mistakes.

But the most memorable takeaway from this discussion was how we need to educate users and bring them into AI discussions. Consumers are the guardrails of the future as agent-based systems emerge. Unlike large corporate systems, these smaller AI systems are downloadable and difficult to track. Stay alert! Be interesting and interested as consumers. We’re the regulators of the future. That’s why I was invited, and now, so are you.

