By Raghav Sehgal

Generative AI for Business Leaders (Part 6/n): Risks and Ethical Considerations of Generative AI

This is Part 6 of the series "Generative AI for Business Leaders"


Generative AI models like DALL-E and GPT-3 seem magical, able to generate human-like text, images, and more with just a few prompts. However, as miraculous as these models may seem, they have very real pitfalls and limitations that must be considered before implementing them. Being aware of these issues is the first step to mitigating risk and using AI responsibly.


Oversimplified Objectives


It’s tempting to think an AI model can be pointed at any problem and create an instant solution. However, clearly defining the right objective is crucial for success. AI models will only be as effective as the goal you give them. Setting unrealistic expectations or poorly framing the objective can send your model down the wrong path. Take the time to think through exactly what you want to accomplish to set your AI project up for success.


High Computational Costs


Generating new content with AI models requires significant computing resources. Specifically, Graphics Processing Units (GPUs) are often used to run AI algorithms in parallel to increase speed. However, access to GPUs can be expensive and complex to implement. Make sure your infrastructure can support AI workloads before beginning a project. If GPU resources are limited, your model’s ability to handle large datasets or generate outputs quickly may be restricted.


Algorithm Hallucination


Unlike humans, AI models do not know when they lack the information needed to generate a valid output. Instead, they will "guess," fabricating plausible-sounding content. This is extremely problematic when outputs are taken as truth instead of suggestions. Always verify model outputs, and never have an AI model directly interact with end users without human checks in place.
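One way to enforce "human checks in place" is to route every model output through a review queue before it reaches an end user. The sketch below is a minimal illustration of that pattern; the class and field names are hypothetical, not part of any specific product.

```python
from dataclasses import dataclass, field

@dataclass
class ReviewQueue:
    """Holds model outputs until a human approves or rejects them."""
    pending: list = field(default_factory=list)
    approved: list = field(default_factory=list)

    def submit(self, prompt: str, output: str) -> None:
        # Nothing generated by the model goes straight to end users.
        self.pending.append({"prompt": prompt, "output": output})

    def review(self, index: int, approve: bool) -> None:
        # A human either releases the item or discards it.
        item = self.pending.pop(index)
        if approve:
            self.approved.append(item)

queue = ReviewQueue()
queue.submit("Q2 revenue?", "Revenue was $4.2M")  # unverified model output
queue.review(0, approve=True)                     # human confirms the figure
```

In a real deployment the queue would be backed by a database and a review UI, but the principle is the same: the model proposes, a person disposes.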


Staleness


AI models are limited by the data they are trained on. They may fall behind in fast-changing environments as new data is not continuously integrated. Monitor model performance over time and retrain periodically with new data to prevent degradation.
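"Monitor and retrain periodically" can be made concrete with a simple drift check: compare the model's recent accuracy against the accuracy it had at launch and flag it for retraining when the gap exceeds a tolerance. This is a minimal sketch with made-up thresholds; production monitoring would track more metrics than accuracy alone.

```python
def needs_retraining(recent_accuracy: list[float],
                     baseline: float,
                     tolerance: float = 0.05) -> bool:
    """Flag a model for retraining when its rolling average accuracy
    drops more than `tolerance` below its accuracy at deployment."""
    rolling = sum(recent_accuracy) / len(recent_accuracy)
    return rolling < baseline - tolerance

# Accuracy at deployment was 0.92; the last four weekly checks
# average 0.845, which is below the 0.87 floor, so retrain.
print(needs_retraining([0.90, 0.86, 0.82, 0.80], baseline=0.92))  # True
```

Running this check on a schedule turns "the model might be stale" from a vague worry into an operational alert.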


Restrictiveness


Many AI models work well with text or images but cannot understand other data types. For example, a natural language model likely cannot solve math equations or integrate spatial data. Understand exactly what data types your model can handle to avoid expecting more than it is capable of.


Interpretability


It can be difficult to diagnose why an AI model makes certain choices, given its complexity. However, interpretability is key to improving performance over time. Look for models with built-in capabilities to explain outputs, or invest in specialized interpretation tools.


Token Constraints


“Tokens” are the chunks into which a model breaks text, typically a word or word fragment. Every model has a maximum number of tokens it can process in a single request, which caps the combined length of inputs and outputs. Be aware of these token constraints when designing projects to ensure the model can handle the full scope of your data.
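A practical consequence: before sending a prompt, check that it leaves room in the token budget for the reply. The sketch below approximates token count by splitting on whitespace; real models use sub-word tokenizers (such as BPE), so treat the numbers as illustrative, not exact.

```python
def fits_token_budget(prompt: str, max_tokens: int,
                      reserved_for_output: int) -> bool:
    """Rough check that a prompt leaves room for the model's reply.
    Whitespace splitting only approximates real sub-word tokenizers."""
    prompt_tokens = len(prompt.split())
    return prompt_tokens + reserved_for_output <= max_tokens

# A short prompt easily fits a 4,096-token budget with 500 reserved.
print(fits_token_budget("summarize this quarterly report", 4096, 500))  # True

# A 4,000-word document plus a 500-token reply does not.
print(fits_token_budget("word " * 4000, 4096, 500))  # False
```

When a document exceeds the budget, common workarounds are chunking the input, summarizing it in stages, or choosing a model with a larger context window.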


Lack of State


Some AI models struggle with “state” - retaining memory of past inputs to inform future outputs. For applications like dialog systems, the lack of state results in incoherent or contradictory responses. If state is required, select models with explicit state tracking or supplement with an external state store.
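The "external state store" mentioned above can be as simple as keeping past conversation turns in your own storage and prepending them to each new prompt, so the stateless model appears to remember. This is a minimal sketch of that pattern; class and method names are hypothetical.

```python
class ConversationStore:
    """External memory for a stateless model: past turns are stored
    here and replayed with each new prompt so context carries over."""

    def __init__(self, max_turns: int = 10):
        self.turns: list[str] = []
        self.max_turns = max_turns

    def add(self, role: str, text: str) -> None:
        self.turns.append(f"{role}: {text}")
        # Keep only the most recent turns to stay within token limits.
        self.turns = self.turns[-self.max_turns:]

    def build_prompt(self, new_message: str) -> str:
        # The model sees the stored history plus the new message.
        return "\n".join(self.turns + [f"user: {new_message}"])

store = ConversationStore(max_turns=4)
store.add("user", "My name is Asha.")
store.add("assistant", "Nice to meet you, Asha.")
prompt = store.build_prompt("What is my name?")
# Earlier context travels with the new question, so "Asha" is in the prompt.
```

Note the trade-off: the replayed history consumes tokens, so the store trims old turns, which is why very long conversations still eventually "forget."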


Data Quality and Availability


As the saying goes - garbage in, garbage out. No model can perform better than the data it learns from. Prioritize accessing high-quality, unbiased, and relevant data in sufficient volumes to train robust models [9].


Ethical Concerns


The generative capabilities of AI models open the door to the creation of fake content, copyright infringement, and more, which we discuss next. Strictly adhere to regulations, implement safeguards to prevent misuse, and monitor model outputs diligently.


Ensuring Ethics and Data Privacy in AI


In addition to the limitations above, responsible use of AI demands considering ethics and privacy. The unprecedented generative capabilities of AI models open the door to a range of ethical risks that must be carefully considered:


Fake content – AI models can generate convincing fake audio, video, images and text known as "deepfakes." The potential to spread misinformation by creating manipulated or completely fabricated content at scale raises huge ethical questions.


Copyright violations – Text and image generation models may end up unintentionally copying from copyrighted source material during training or inference. Companies have a duty to ensure models comply with copyright law.


Data privacy – Training generative models on personal data such as chat logs or emails risks violating user privacy if consent was not given. Data must always be anonymized and protected.
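As a concrete starting point for the anonymization requirement, identifiers can be redacted before text ever enters a training set. The sketch below catches only two obvious patterns (emails and US-style phone numbers); a real pipeline needs much broader PII detection and is no substitute for obtaining consent.

```python
import re

# Deliberately simple patterns for illustration only.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def anonymize(text: str) -> str:
    """Redact obvious identifiers before text is used for training."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

print(anonymize("Contact jane.doe@example.com or 555-123-4567."))
# Contact [EMAIL] or [PHONE].
```

Redaction of this kind addresses only direct identifiers; names, addresses, and rare attribute combinations can still re-identify people, which is why anonymization must be paired with access controls and regulatory compliance.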


Accessibility and Inclusivity – The "digital divide" means lower income groups may lack access to generative models. There are also concerns over representation and potential bias against marginalized communities.


Malicious use – Generative models could be utilized by bad actors to create spam, phishing content, abusive media and more. Safeguards need to be in place to prevent misuse.


Mental health – Advanced generative models may have unintended psychological effects on some users, especially younger people. Mental health support and education around responsible use are important.


Accountability – When problematic content is created, companies must have procedures to investigate, take appropriate action and notify affected users in a timely manner.


To uphold ethics in the AI field, leaders must prioritize transparency, inclusivity, privacy, accountability and social responsibility in how these powerful technologies are built, deployed and governed.


Follow these guidelines to safeguard your organization against the ethical dilemmas around generative AI:


Transparency - Disclose when and how AI is used, as well as details of model training.


Avoiding Bias - Proactively analyze data and model outputs for unwanted bias. Adjust data and model appropriately.


Data Privacy - Collect only necessary data, keep it secure, and be compliant with all regulations.


Specifically:


- Establish policies for protecting data that adhere to laws.

- Educate all employees handling data on security best practices.


Avoiding IP Violations - Understand and comply with all copyright and trademark laws relating to generated content.


Active Oversight - Have humans continually review AI prompts, outputs, and outcomes to catch issues. Provide mechanisms for user feedback on AI systems. Clearly explain the role of AI to end users.


By being aware of the limitations and proactively addressing the ethical implications, companies can minimize risks and harness the power of AI responsibly. The technologies may not be perfect yet, but with diligence and empathy they can be used as a force for good.



Sources:


[1] https://hbr.org/2020/02/ai-projects-often-fail-because-they-dont-set-the-right-objectives

[2] https://analyticsindiamag.com/gpu-bottleneck-issue-deep-learning-developers/

[3] https://www.forbes.com/sites/robtoews/2022/08/09/ai-hallucination-what-you-need-to-know/?sh=6130743d6626

[4] https://towardsdatascience.com/staleness-in-machine-learning-199c3ec20064

[5] https://bdtechtalks.com/2021/08/31/AI-algorithms-limitations-data/

[6] https://www.intel.com/content/www/us/en/artificial-intelligence/posts/explainability-in-ai.html

[7] https://timdettmers.com/2022/09/03/gpt-token-limits/

[8] https://lilianweng.github.io/posts/2018-06-24-attention/

[9] https://www.forbes.com/sites/robtoews/2020/06/17/deep-learnings-sensitivity-to-training-data-creates-challenges/

[10] https://www.forbes.com/sites/robtoews/2020/05/18/deepfakes-are-going-to-wreak-havoc-on-society-we-are-not-prepared/?sh=229d97b47494

[11] https://www.brookings.edu/research/deepfakes-and-synthetic-media-in-financial-markets/

[12] https://www.eff.org/deeplinks/2022/08/generative-ai-will-soon-raise-all-sorts-copyright-issues

[13] https://www.forbes.com/sites/robtoews/2020/12/13/ai-ethics-and-data-privacy-are-increasingly-intertwined/?sh=d722b9f464c5

[14] https://www.frontiersin.org/articles/10.3389/fdata.2022.963096/full

[15] https://www.schneier.com/blog/archives/2020/01/ai_and_its_thre.html

[16] https://www.psypost.org/2022/09/generative-ai-might-harm-mental-health-according-to-technology-experts-63909

[17] https://www.infoq.com/articles/ai-accountability-transparency/

[18] https://www.ibm.com/blogs/policy/transparency-trust-ai/

[19] https://www.nytimes.com/2021/11/11/technology/artificial-intelligence-bias.html

[20] https://www.forbes.com/sites/robtoews/2020/12/13/ai-ethics-and-data-privacy-are-increasingly-intertwined/?sh=d722b9f464c5

[21] https://www.cmswire.com/digital-marketing/customer-data-security-a-guide-for-marketers/

[22] https://www.varonis.com/blog/data-privacy-policy

[23] https://digitalguardian.com/blog/importance-employee-data-privacy-training

[24] https://www.forbes.com/sites/robtoews/2022/09/04/ai-ethics-ip-law-and-generated-content--what-now/?sh=340db0f05ac1

[25] https://www.technologyreview.com/2018/10/24/139313/how-to-bring-more-human-wisdom-into-ai-systems/



