
4 steps to responsibly scale generative AI

Article | Feb 21, 2024
By Lakshmi Krishna and Wiem Sabbagh

We increasingly field questions from our customers about how best to incorporate generative AI into their forthcoming budgets. They want guidance not only on integrating this technology into their operations and workplace, but also on how to deploy generative AI tools securely and responsibly.

To address this need, we’ve outlined four strategies to help navigate the complexities of responsibly scaling generative AI. These strategies are tailored for teams seeking both a quick start and the tools to build a sustainable, generative AI ecosystem for future investments.

Strategy 1: Select and prototype high-value use cases

Generative AI is a powerful tool, capable of enhancing various business outcomes—from improving customer and employee experiences to driving operational efficiencies. However, given the initial investments required to make these outcomes a reality, it’s important to select your first use cases strategically.

Consider these key characteristics when deciding where to start:

High value: Identify use cases that drive business outcomes and offer a significant return on investment.

Low risk: Avoid use cases that could introduce unfairness, bias or reputational risk and potentially harm your brand.

Responsible: Be aware of evolving regulations that increasingly push for greater transparency and keep data-specific regulations in mind.

Feasible: Assess technical feasibility, cost and value, and ensure alignment with your company’s data strategy, culture and objectives.

Once you identify a strong starting point, it’s time to move forward to prototyping to determine technical feasibility and identify the right data foundation for your use case. Through these practical steps, your team can measure the true ROI and potential outcomes of these tools, as well as their shortcomings and costs.


For example, let’s look at one of our customers: an international airport that wanted to employ generative AI to analyze, categorize and respond to social media feedback to enhance the speed and accuracy of its customer service response.

This application of generative AI now allows the airport’s customer service function to offer real-time, effective support, boosting both customer satisfaction and productivity. 
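The routing pattern behind this kind of application can be sketched in a few lines. In the sketch below, a simple keyword classifier stands in for the generative AI model, and the category names and keywords are purely illustrative assumptions, not details of the airport's actual system:

```python
# Sketch: categorize incoming social media feedback and route it to
# per-topic response queues. The keyword classifier is a stand-in for
# an LLM call; categories and keywords are illustrative assumptions.

CATEGORIES = {
    "baggage": ["bag", "luggage", "suitcase"],
    "delays": ["delay", "late", "cancelled"],
    "facilities": ["wifi", "lounge", "restroom"],
}

def categorize(message: str) -> str:
    """Return the first matching category, or 'general' if none match."""
    text = message.lower()
    for category, keywords in CATEGORIES.items():
        if any(word in text for word in keywords):
            return category
    return "general"

def route(messages: list[str]) -> dict[str, list[str]]:
    """Group messages into queues keyed by category."""
    queues: dict[str, list[str]] = {}
    for msg in messages:
        queues.setdefault(categorize(msg), []).append(msg)
    return queues
```

In a production system, the classifier call would be replaced by a generative model that can also draft the response, while the routing logic around it stays much the same.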

Strategy 2: Establish a strong data foundation

A robust data architecture is crucial for responsibly scaling generative AI.

This strategy requires readying your data architecture and data to meet the technology’s unique demands: compatibility with large language models (LLMs), the ability to process unstructured data, support for semantic search over vector databases for content discovery, and updated data models.

Think of generative AI as a car, your data architecture as its engine and your data as its fuel. Just as a car needs a reliable engine and the right type of fuel to perform well, generative AI requires a strong data foundation and clean data to optimally function.

The old adage ‘garbage in, garbage out’ rings true here: poor data quality can quickly increase the risk of model errors and biases. Therefore, ensuring that your data is well governed and clean in the vector database is vital to provide accurate, context-rich and relevant responses.
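At its core, the semantic search that a vector database enables is a nearest-neighbor lookup over embeddings. The toy sketch below uses cosine similarity over a small in-memory store; the 3-dimensional vectors and document texts are illustrative assumptions — in practice, embeddings come from an embedding model and the store is a real vector database:

```python
# Sketch: semantic search over a toy in-memory "vector database".
# The 3-dimensional embeddings and documents below are illustrative
# assumptions; real systems use model-generated embeddings and a
# dedicated vector store.
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# (embedding, document) pairs — the clean, governed content to retrieve.
VECTOR_STORE = [
    ([0.9, 0.1, 0.0], "Refund policy for delayed flights"),
    ([0.1, 0.8, 0.2], "Airport lounge opening hours"),
    ([0.0, 0.2, 0.9], "Lost baggage claim procedure"),
]

def semantic_search(query_vec: list[float], top_k: int = 1) -> list[str]:
    """Return the top_k documents most similar to the query embedding."""
    ranked = sorted(VECTOR_STORE,
                    key=lambda item: cosine_similarity(query_vec, item[0]),
                    reverse=True)
    return [doc for _, doc in ranked[:top_k]]
```

The quality of what comes back depends entirely on what went into the store — which is why clean, well-governed data in the vector database matters so much.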


Effective data governance and security are especially critical in highly regulated industries like banking and healthcare, where quality control and governance present even more significant challenges. As a result, laying a strong data foundation must also include encryption for proprietary and customer data and robust policies for data access. Achieving this level of protection involves implementing data catalogs, dynamic data classification, and tagging and protection policies that can evolve with changing regulations.

Strategy 3: Implement an LLMOps framework

The true value of generative AI emerges when it is operationalized at scale through reliable, trustworthy systems. And that’s where large language model operations (LLMOps) comes in.

LLMOps encompasses the practices, tools and techniques for managing the whole lifecycle of LLMs—from data management and model development to deployment.

A few of the core features of an effective LLMOps framework include:

Prompt management: Establish a central repository for prompts and templates to enhance reusability and experimentation.

Security: Incorporate security measures—prompt filtering and content moderation, topic blacklisting and whitelisting—while maintaining access to data with security policy packs that are customizable for specific industries.

Observability: A centralized logging and monitoring system that oversees the data and retrieval-augmented generation (RAG) pipelines and the performance of the LLM chains—such as latency or number of requests—helps identify issues faster. Monitoring data and model drift and detecting bias are also critical to overall observability.

Orchestration: It’s vital to have a robust orchestrator that integrates your generative AI solution into applications, provides for testing and scoring multiple models, and allows for training and fine-tuning various LLMs. Having humans in the loop is another critical element of a responsible AI system.

FinOps: There’s an inherent pricing complexity to LLMs. As such, it's important to have transparent and integrated financial operations (FinOps) intelligence that provides visibility into usage tracking, cost attribution and anomaly detection. Your choice of LLMs (proprietary or open source) and the number of tokens based on the use case are also key to determining costs.
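The FinOps point can be made concrete with a small cost-attribution sketch. The per-1,000-token prices below are illustrative assumptions, not real vendor rates; actual FinOps tooling would pull live rates and usage logs:

```python
# Sketch: per-request cost attribution for LLM usage.
# Prices are illustrative assumptions (USD per 1,000 tokens), not real
# vendor rates; model names are hypothetical.
PRICE_PER_1K = {
    "proprietary-large": {"input": 0.03, "output": 0.06},
    "open-source-hosted": {"input": 0.001, "output": 0.002},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost of one request: tokens in and out, priced per 1,000 tokens."""
    rates = PRICE_PER_1K[model]
    return ((input_tokens / 1000) * rates["input"]
            + (output_tokens / 1000) * rates["output"])

def attribute_costs(usage_log: list[dict]) -> dict[str, float]:
    """Aggregate cost per team from a usage log for chargeback reporting."""
    totals: dict[str, float] = {}
    for record in usage_log:
        cost = request_cost(record["model"], record["input_tokens"],
                            record["output_tokens"])
        totals[record["team"]] = totals.get(record["team"], 0.0) + cost
    return totals
```

Even this simple arithmetic shows why model choice and token counts dominate the cost equation: a proprietary model at thirty times the per-token price changes the economics of a high-volume use case entirely.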

Incorporating these functionalities into your LLMOps is crucial for effectively and responsibly scaling generative AI.
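As one concrete illustration of the security measures above, a minimal topic blacklist/whitelist check on incoming prompts might look like the following. The topic lists are illustrative assumptions; production systems would layer moderation models and industry-specific policy packs on top:

```python
# Sketch: prompt filtering with a topic blacklist and whitelist.
# Topic lists are illustrative assumptions; a real deployment would add
# moderation models and customizable, industry-specific policy packs.
BLOCKED_TOPICS = {"credentials", "social security number"}
ALLOWED_TOPICS = {"billing", "flight status", "baggage"}

def filter_prompt(prompt: str) -> tuple[bool, str]:
    """Return (allowed, reason) for an incoming prompt."""
    text = prompt.lower()
    for topic in BLOCKED_TOPICS:
        if topic in text:
            return False, f"blocked topic: {topic}"
    if ALLOWED_TOPICS and not any(t in text for t in ALLOWED_TOPICS):
        return False, "topic not on allowlist"
    return True, "ok"
```

A gate like this sits in front of the model in the LLMOps pipeline, so that every prompt is screened — and every rejection logged for observability — before any tokens are spent.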

Strategy 4: Build the necessary skillsets and expertise

Successfully implementing generative AI hinges on the right skills. This challenge isn’t exclusive to teams actively scaling generative AI, but rather extends across all sectors as generative AI becomes more and more integrated into various products.

Organizations today need to focus on upskilling their current workforce while also attracting specialized talent. The nuts and bolts of building generative AI responsibly lie in core roles such as data engineering, data platform design and build, and backend development. But to deploy these tools, your team will also need expertise in prompt engineering, generative AI application development, regulatory and responsible AI practices, and research into newer areas of this evolving technology.

Ready, set, scale

As generative AI increasingly becomes a cornerstone in the future of business, it’s imperative to develop a strategy that resonates with your organization and teams. Initiating with high-value and low-risk use cases, building a strong data foundation, adopting a robust LLMOps framework, and fostering a skilled workforce are integral steps in this process.

These strategies equip teams to not only adapt but also thrive in this transformative era—understanding generative AI not merely as a tool, but as an emerging collaborator.

Lakshmi Krishna is Associate Director, Data Science at Kyndryl and Wiem Sabbagh is Director, Global Leader of Data & AI Consult at Kyndryl.