Powerful artificial intelligence (AI) tools like Large Language Models (LLMs) have been making headlines as companies consider how the technology can drive more efficient operations and better customer experiences. One very promising LLM use case is product experience management. But as companies race to integrate LLMs into their business applications, some risks should be considered along the way.
Benefits of using LLMs to enrich the product experience
With more of the shopping experience occurring online, delivering a seamless and engaging digital experience has become table stakes. Product experience management helps ensure users have access to detailed and accurate product information, along with rich content and imagery, that communicates a product’s value, benefits, footprint and more.
Delivering better product experiences can help generate future revenue while also mitigating the revenue loss and logistic headaches associated with product returns by ensuring consumers have all the information needed to make the right product choice.
By using LLMs as part of a master data management (MDM) prompt library, companies can create rich, compelling product descriptions in a faster and more efficient way for various segments and channels.
In this LLM use case, it’s important that the prompt library is part of the foundational MDM design, which integrates advanced security measures, flexible prompt crafting, and thorough content review processes. Without this integrated approach, companies open themselves up to the risks of prompt injections, insecure output handling and other vulnerabilities.
Potential risks of using LLMs and how to safeguard against them
The Open Worldwide Application Security Project (OWASP) recently released its Top 10 list of vulnerabilities for LLM applications: prompt injection; insecure output handling; training data poisoning; model denial of service; supply chain vulnerabilities; sensitive information disclosure; insecure plugin design; excessive agency; overreliance; and model theft.
These vulnerabilities can introduce several risks to your business, ranging from security breaches, system failures and service disruptions to compromised data integrity through the generation of misinformation or inappropriate and biased content.
Being aware of the risks of LLMs gives you the opportunity to turn those vulnerabilities into strengths by ensuring your AI initiatives are governed and grounded in strong security principles.
Let’s look at a few of these vulnerabilities – in the context of product experience management – and how using an MDM prompt library can help mitigate the risks:
- Prompt injections: When using LLMs to enrich the data used in product descriptions, indirect or direct prompt injections could result in product descriptions with malicious content, or poor product descriptions that negatively impact conversion rates. For product descriptions created in bulk, this can be a costly mistake, as it requires a manual clean-up effort to remedy.
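To make the mitigation concrete, here is a minimal sketch of the idea behind a prompt library: descriptions are generated only from pre-approved templates, and product data fields are screened before they are interpolated. The template text, field names and denylist patterns below are hypothetical illustrations, not an actual MDM product's API.

```python
import re

# Hypothetical sketch: a prompt "library" of pre-approved templates.
# Only named placeholders are filled with product data; free-form text
# never becomes part of the instruction itself.
TEMPLATES = {
    "product_description": (
        "Write a concise, factual product description for the {channel} "
        "channel using only these attributes: name={name}; material={material}."
    ),
}

# Simple denylist of phrases commonly seen in injection attempts.
SUSPICIOUS = re.compile(
    r"ignore (all|previous) instructions|system prompt|you are now",
    re.IGNORECASE,
)

def build_prompt(template_id: str, **fields: str) -> str:
    """Fill a vetted template, rejecting fields that look like injections."""
    for key, value in fields.items():
        if SUSPICIOUS.search(value):
            raise ValueError(f"possible prompt injection in field {key!r}")
    return TEMPLATES[template_id].format(**fields)
```

A call like `build_prompt("product_description", channel="web", name="Rain Jacket", material="recycled nylon")` returns a filled prompt, while a field containing "ignore previous instructions" is rejected before it ever reaches the model. A real deployment would combine such screening with model-side defenses rather than rely on a denylist alone.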
An MDM prompt library mitigates these risks because the prompts are crafted in advance by prompt engineers, and built-in functions can screen inputs for malicious content, greatly reducing the chance of a successful injection.
- Insecure output handling: This vulnerability occurs when downstream systems accept an LLM's output without any review or interaction to confirm the content is factual and meets the objectives.
Adhering to the principles of Responsible AI and building human-centric AI systems greatly reduces the risk of poor-quality content and security issues. With this approach, product descriptions created using an MDM prompt library are passed on as suggestions to content reviewers, who approve the content for usage. Although the content is AI-generated, a "human-in-the-loop" protocol governs its publication.
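The human-in-the-loop pattern described above can be sketched as a simple review queue: AI output is staged as a suggestion and nothing is published until a reviewer approves it. The class and status names below are illustrative assumptions, not a specific MDM vendor's workflow API.

```python
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    PENDING = "pending_review"
    APPROVED = "approved"
    REJECTED = "rejected"

@dataclass
class Suggestion:
    sku: str
    text: str
    status: Status = Status.PENDING

class ReviewQueue:
    """AI output is staged as a suggestion; only a human approval publishes it."""

    def __init__(self) -> None:
        self._items: dict[str, Suggestion] = {}
        self.published: dict[str, str] = {}

    def submit(self, sku: str, ai_text: str) -> None:
        # LLM output enters the queue unpublished, awaiting review.
        self._items[sku] = Suggestion(sku, ai_text)

    def approve(self, sku: str) -> None:
        item = self._items[sku]
        item.status = Status.APPROVED
        self.published[sku] = item.text  # only approved content goes live

    def reject(self, sku: str) -> None:
        self._items[sku].status = Status.REJECTED
```

The design choice here is that the publish step is only reachable through an explicit human action, so insecure output handling cannot occur by default.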
Be sure to use an MDM with advanced and configurable workflows, which make it possible to embed the prompt library capabilities into any content authoring process across the business.
- Training data poisoning: In a product experience management scenario, relying on foundation models to generate product descriptions creates a direct dependency on any bias introduced into those models when they were trained.
Businesses typically have no influence on how foundation models are trained, which makes it all the more important to use them with careful consideration of any bias they may carry. Bias takes many forms, including selection bias, confirmation bias, measurement bias and stereotyping.
With rich data sets from MDM, models can be fine-tuned to provide a better understanding of the businesses’ product offerings. In clothing manufacturing, for example, information about fabric composition, washing instructions, etc., could be relevant to use when fine-tuning a model to provide better product descriptions.
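As an illustration of feeding governed MDM data into fine-tuning, product records can be turned into prompt/completion pairs in a format such as JSONL. The attribute names and record shape below are hypothetical; the exact format depends on the model provider.

```python
import json

# Hypothetical sketch: turn governed MDM product records into training
# examples (prompt/completion pairs) for fine-tuning or grounding an LLM.
products = [
    {
        "sku": "TS-01",
        "name": "Cotton Tee",
        "fabric": "100% organic cotton",
        "care": "machine wash cold",
        "description": "A soft everyday tee made from organic cotton.",
    },
]

def to_training_record(product: dict) -> str:
    """Build one JSONL line pairing product attributes with the approved copy."""
    prompt = (
        f"Describe {product['name']} "
        f"(fabric: {product['fabric']}; care: {product['care']})."
    )
    return json.dumps({"prompt": prompt, "completion": product["description"]})

jsonl = "\n".join(to_training_record(p) for p in products)
```

Because the completions come from governed, reviewer-approved master data rather than scraped text, the fine-tuning set inherits the MDM's quality standard.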
An MDM prompt library built with a human-in-the-loop protocol helps ensure that outputs are validated and scrutinized for bias. Additionally, using the full product data set in the MDM to fine-tune an LLM further reduces the risk of bias: master data governance processes allow users to enrich data to the highest or desired quality standard, which can then be fed into the LLM as part of the training or grounding process.
- Sensitive information disclosure: LLMs may inadvertently reveal confidential data in their responses, leading to unauthorized data access, privacy violations and security breaches. It's crucial to implement data sanitization and strict user policies. Strong data governance policies, encrypted API keys and careful design of data access through user roles and responsibilities help mitigate these risks.
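The data sanitization step mentioned above can be sketched as a post-processing pass that scrubs obviously sensitive patterns from model output before downstream systems see it. The two patterns below (email addresses and `sk-`-prefixed API keys) are illustrative assumptions; production systems would use a fuller set of detectors.

```python
import re

# Hypothetical sketch: scrub sensitive patterns from LLM output before
# it is handed to downstream systems or published.
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[REDACTED EMAIL]"),
    (re.compile(r"sk-[A-Za-z0-9]{16,}"), "[REDACTED KEY]"),
]

def sanitize(text: str) -> str:
    """Replace each matched sensitive pattern with a redaction marker."""
    for pattern, replacement in PATTERNS:
        text = pattern.sub(replacement, text)
    return text
```

For example, `sanitize("contact jane@example.com")` yields `"contact [REDACTED EMAIL]"`. Redaction of this kind complements, rather than replaces, access controls that keep confidential data out of prompts in the first place.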
While these are just a few examples, here are some other recommendations to consider when using LLMs to create lasting consumer engagement through compelling product experiences:
- Use a private LLM (e.g., OpenAI models on Microsoft Azure) to ensure data privacy
- Create alerts in case of service unavailability or long response times
- Configure different services in different regions as a fallback
- Use an open platform MDM that enables connections to LLMs via REST API
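The middle two recommendations can be combined into one pattern: try endpoints region by region, alert on unavailability or slow responses, and fall back to the next region. The endpoint URLs and latency threshold below are hypothetical, and `call_endpoint` stands in for the actual REST client (e.g., a POST to the service URL).

```python
import time

# Hypothetical endpoints for the same LLM service in two regions.
ENDPOINTS = [
    "https://llm.westeurope.example.com",
    "https://llm.eastus.example.com",
]
LATENCY_ALERT_SECONDS = 2.0

def generate(prompt: str, call_endpoint, alert) -> str:
    """Try each region in order; alert on failures or slow responses."""
    last_error = None
    for url in ENDPOINTS:
        start = time.monotonic()
        try:
            result = call_endpoint(url, prompt)
        except Exception as exc:  # service unavailable: alert, try next region
            alert(f"{url} unavailable: {exc}")
            last_error = exc
            continue
        if time.monotonic() - start > LATENCY_ALERT_SECONDS:
            alert(f"{url} responded slowly")
        return result
    raise RuntimeError("all regions failed") from last_error
```

Passing the HTTP client and the alerting hook in as functions keeps the fallback logic independent of any particular LLM vendor's SDK, which fits the open-platform, REST-based integration the last recommendation calls for.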
By embedding the principles of Responsible AI and human-in-the-loop functionalities, the MDM prompt library enhances the quality and relevance of AI-generated content. It ensures that the output not only meets but exceeds the highest standards of quality and safety, all while boosting operational efficiency and revenue opportunities and safeguarding against AI-related threats.
Look for an MDM partner that's committed to data integrity and will make your company's security, efficiency and innovation a priority, turning the use of LLMs into a strategic advantage while carefully observing and mitigating the associated risks.