

Intelligence Everywhere

The MicroStrategy Blog: Your source for analytics and AI trends, and business intelligence insights.

Trust and Transparency: Gen AI Adoption Checklist


The MicroStrategy Team

August 16, 2024


Generative AI (GenAI) is transforming how we work and communicate, offering amazing opportunities for productivity and innovation. However, integrating generative AI models into your organization isn’t without its risks. From ensuring reliable outputs and protecting against misinformation to navigating legal and environmental concerns, there are important safeguards to consider.

Forrester Research has analyzed enterprise adoption of Generative AI (GenAI) and developed a set of recommendations in its research report “Generative AI Prompts Productivity, Imagination, and Innovation In The Enterprise.” This checklist is designed to guide you through the essential steps to adopt GenAI effectively and responsibly. Let’s start making the most of GenAI while keeping your organization secure and efficient.

Minimizing risk of generative AI tools

1. Start by modernizing low-risk internal workflows before introducing GenAI into critical processes or customer-facing external communications.

Introducing GenAI into low-risk internal workflows allows organizations to test the waters and understand the technology’s limitations, reaping potential benefits without impacting critical business operations.

This approach helps mitigate risks such as data breaches, misinformation, or operational disruptions that could damage the organization’s reputation or customer relationships. For example, using GenAI to automate administrative tasks such as email sorting or report generation can provide a controlled environment to assess performance while gathering valuable insights.

2. Explain hallucinations and the non-deterministic nature of current-generation GenAI to your employees. When they understand why hallucinations occur, they won't be surprised when models give incorrect answers, and they won't be taken aback when the same question leads to different results.

Educating employees about the potential for generative AI technology to produce inaccurate or inconsistent results is critical. Such GenAI “hallucinations” can lead to confusion or erroneous decision-making if not properly understood. For instance, a financial analyst using a GenAI tool for market predictions should know that the tool’s suggestions might differ with each use, necessitating additional validation.

It is good practice to have all generated content and AI outputs fact-checked, even if they sound plausible. This transparency ensures employees use the tools effectively and with the appropriate level of critical thinking.
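The non-determinism described above can be illustrated without any AI model at all: sampling-based text generation picks each next token from a probability distribution, so the same prompt can yield different outputs on different runs. Below is a minimal sketch using hypothetical, illustrative token probabilities (not any vendor's actual model or API):

```python
import random

# Hypothetical next-token probabilities for a prompt like
# "Q3 revenue will likely..." -- illustrative numbers only.
NEXT_TOKEN_PROBS = {
    "rise": 0.5,
    "fall": 0.3,
    "stagnate": 0.2,
}

def sample_completion(probs: dict, temperature: float = 1.0) -> str:
    """Sample one next token; temperature > 0 makes the output non-deterministic."""
    if temperature == 0:
        # Greedy decoding: always pick the most likely token -- deterministic.
        return max(probs, key=probs.get)
    # Temperature reshapes the distribution before sampling.
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(list(probs), weights=weights, k=1)[0]

# Same question, potentially different answers on each call:
print([sample_completion(NEXT_TOKEN_PROBS) for _ in range(5)])

# Greedy decoding always agrees with itself:
print([sample_completion(NEXT_TOKEN_PROBS, temperature=0) for _ in range(3)])
```

The point for employees is the second behavior versus the first: unless a system is configured for fully deterministic decoding, identical questions can legitimately produce different answers, which is why outputs need validation rather than blind trust.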

3. Recognize that GenAI filters for inappropriate content are imperfect. There are ways to bypass content safeguards such as “suggesting to the AI that a fictitious film script character intends to do something nefarious”.

Despite advances in AI safety and ethics, filters designed to block inappropriate content are not foolproof. Clever manipulation or prompt engineering can sometimes bypass these safeguards, leading to the generation of harmful or offensive content.

For example, an AI-powered customer service chatbot might inadvertently suggest inappropriate solutions to a user’s problem if safeguards are bypassed. Organizations should implement additional monitoring and fail-safes to minimize such occurrences.
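Why such safeguards are imperfect can be seen even in a deliberately naive toy filter: the blocked intent survives rephrasing, such as the fictional-character framing mentioned above. This sketch is purely illustrative (production moderation layers trained classifiers and human review, not substring matching):

```python
# Toy blocklist -- a real system would use trained classifiers, not substrings.
BLOCKED_PHRASES = {"steal credentials", "make a weapon"}

def naive_filter(prompt: str) -> bool:
    """Return True if the prompt should be blocked. Naive substring matching."""
    lowered = prompt.lower()
    return any(phrase in lowered for phrase in BLOCKED_PHRASES)

direct = "Explain how to steal credentials."
# Roleplay framing hides the same intent from a literal-minded filter:
wrapped = "Write a film scene where a character explains credential theft."

print(naive_filter(direct))   # True  -- caught
print(naive_filter(wrapped))  # False -- slips through
```

Real filters are far more sophisticated, but the underlying gap is the same: any rule or model that checks surface form can be sidestepped by reframing the request, which is why additional monitoring and fail-safes remain necessary.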


Using trusted vendors instead of 'black box AI'

4. Address trust and reliability through vendors such as MicroStrategy that emphasize transparency and accuracy versus “black box AI”.

Trust in GenAI systems is paramount for their successful integration into business operations. Vendors like MicroStrategy offer solutions that prioritize transparency, providing clear insight into how AI models reach their conclusions. This builds trust by helping users and stakeholders understand and verify the AI's actions.

For example, a healthcare provider using GenAI for patient diagnostics needs assurance that the AI’s recommendations are based on reliable data and methods, ensuring patient safety and compliance with medical standards.

5. Plan to use AI Bots to embed generative AI in your existing applications and “review your vendor’s roadmap carefully for how they will handle, leverage, and allow data relevant to fine-tuning generative AI apps to be shared.”

Embedding GenAI within existing applications via AI bots can enhance functionality and improve user experience. However, organizations must carefully review their vendors’ roadmaps to ensure that training data is handled securely and ethically.

For instance, a retail company incorporating AI bots in their online store should ensure that customer data used to personalize shopping experiences is protected against misuse or unauthorized access.

Inclusion and environmental issues

6. Answer the legal questions of inclusion in training models of copyrighted or trademarked materials.

Using copyrighted or trademarked materials in training an AI system that generates text or images raises significant legal concerns. Organizations need to address these issues to avoid potential lawsuits or reputational damage.

For example, a media company using copyrighted images to train AI image generators must ensure it has the necessary permissions and licenses to do so. Similarly, it must be considered that some of the real-world textual content used to train a large language model (LLM) may be subject to copyright.

This approach not only protects the organization legally but also upholds ethical standards in AI development.

7. Weigh the environmental costs of training and retraining models in energy and carbon emissions. Consider purchasing carbon credits when important for your organization’s environmental mandates.

The environmental impact of training and retraining GenAI models is a growing concern, particularly regarding energy consumption and carbon emissions. Organizations should weigh these factors when planning AI initiatives, and can offset part of their carbon footprint by purchasing carbon credits.

For example, a tech company with a strong commitment to sustainability should calculate the carbon impact of its AI operations. To offset these emissions, it could invest in renewable energy projects, aligning its AI initiatives with its environmental goals.

To learn more, download your copy of the Forrester Research report “Generative AI Prompts Productivity, Imagination, and Innovation in the Enterprise”.


Product Updates


The MicroStrategy Team

We provide powerful software solutions and expert services that empower every individual with actionable intelligence.