
Webinar and Ebook on the Legal and Ethical Uses of AI in Business Available to VDB Community

With the release of image generators like DALL-E and Midjourney and of large language models (LLMs) like ChatGPT, whose underlying architecture powers Bing's new search function, artificial intelligence and machine learning have moved into mainstream public consciousness. These tools have generated excitement about increased productivity and hand-wringing about the future of white-collar jobs in equal measure, and businesses have rushed to incorporate AI into their operations for fear of being left behind.

But AI is more than machine-generated images and text. Big businesses have been using AI for years in everything from automated production to inventory management to customer data collection and analysis. Many of these tools are now available to smaller businesses as well. They are worth adopting, provided you understand both the tools themselves and the legal and ethical issues their adoption might raise.

While we tend to imagine artificial intelligence in science-fictional (and often apocalyptic) terms, like Skynet from the Terminator films, the reality is far more mundane. But even if we don't have to worry about machines coming to life and suddenly making decisions on their own, a poorly thought-out implementation of AI can still do serious harm to your business.

In their webinar titled "Responsible AI Adoption: How to Seize AI Opportunities Ethically and Legally," held last week, technology and business growth expert Andrea Hill and intellectual property attorney Michele Berdinis discussed the legal and ethical issues surrounding the adoption of AI. They examined some of the legal risks and gray areas surrounding machine-generated content, as well as the ethical issues involved in the training and implementation of AI tools. But rather than conclude that the entire field is too complex and fraught with legal and ethical issues to engage with at all, they outlined exactly what the issues are and how best to navigate them to protect your business.

Ethical and Legal Concerns

Explainability: A large language model like GPT-4 can provide natural language answers to questions. Sometimes you can feel like you are talking to another person. But LLMs are not people, and there is no actual intelligence or sentience behind them. They are statistical models, like your phone’s autocomplete on steroids. They are trained on so much data and operate at such levels of complexity that working out precisely how they arrive at a certain answer is often impossible. This makes them less than trustworthy. When relying on a chatbot for answers, it’s always a good idea to verify those answers against another source. And when using AI for something like automated inventory management, you’ll want to know how exactly the AI is making its decisions.

A clear example of the importance of explainability involved the use of AI to determine whether an image showed a benign or malignant tumor. Because doctors often photograph a tumor they suspect is cancerous next to a ruler for scale, the AI "learned" a spurious correlation between rulers and malignancy and flagged photos containing rulers as malignant. It was vital in this case that researchers could explain how the AI was making its decisions, so that they did not draw false conclusions from the model's faulty reasoning.

Bias and Discrimination: A lack of explainability and transparency extends beyond LLMs. Any AI system is only as good as the data it is trained on and the algorithms it uses. If your tech partners can't explain how their algorithms work, AI can lead you down the wrong path, producing not only incorrect analysis but even pushing you toward something discriminatory and illegal. For example, in 2017, the city of Rotterdam in the Netherlands began using AI to identify welfare fraud. The algorithm was poorly trained and singled out two groups for investigation: single mothers and refugees. The city spent time, resources, and money investigating them before realizing that its system was unfairly targeting two vulnerable demographics based on flawed training data.

Copyright: The US Copyright Office has already determined that only works created by humans can be copyrighted. This means that any images or text you generate using AI are not truly yours in the legal sense: you don't own them, and in fact, nobody does. But there is a lot of gray area between "fully AI-generated" and "human artist or writer assisted by AI," and the ethics and legality of using AI-generated text and images for commercial purposes haven't yet been fully worked out.

The webinar and associated ebook also discuss ethical concerns about job displacement and dependence on technology, intellectual property infringement, privacy issues, misinformation, and regulatory compliance. These are all issues business owners should familiarize themselves with to avoid repeating the mistakes in the examples above.

Don’t Be Afraid; Be Informed

While these issues might scare you away from AI, that is not the purpose of this article or the webinar. None of this is to say that AI as a whole is dangerous or inherently unethical. In the rush to adopt new technologies, businesses simply need to equip themselves with knowledge of the legal and ethical pitfalls so they can ask the right questions and make informed decisions about implementation. In the future, AI-powered software for both operations and sales will no longer be a luxury but a necessity for running a modern, competitive business.

VDB has arranged for both a recording of the webinar and an ebook expanding on the webinar’s contents to be made available to the VDB community.

Watch the webinar: vdb.guru/ai-webinar

Download the ebook: vdb.guru/ai-book