Business transformation

A business-first strategy will get the most out of generative AI

Article Jan 10, 2024
By Edd Pineda

Generative AI is an emerging, game-changing subset of AI in which machines can produce output such as code, text, audio, photo-realistic images, artwork and music.

Systems and algorithms identify patterns and structures within the data used to train them and can generate something new in response to descriptive instructions or prompts.

Generative AI has the potential to revolutionize business and industry, education, the arts and culture in ways we can’t yet imagine, especially if coupled with quantum computing. That potential is still down the road, but many industries and markets are looking to get the most out of generative AI right now.

Enterprise spending on generative AI solutions is forecasted to reach US$143 billion by 2027.1

Any organization committing resources to generative AI must recognize the business, legal and ethical concerns associated with the technology. That includes updating the data governance strategy, roles and processes, and enhancing the data literacy program to cover the related education requirements.

Top concerns about generative AI

Data privacy. AI training models are likely to include customer or employee information that’s protected by personal data privacy laws and regulations in many countries. Misusing personally identifiable information or sensitive personal information could result in steep penalties, legal action and reputational harm.

Attribution and data ownership. Data for training may be available, but it isn’t always clear if that data is protected by copyright or who has the authority to permit its use as input for generative AI.

Intellectual property. Intellectual property and intellectual capital are valuable business assets. Using them in data training models or prompts could unintentionally expose proprietary information to outside audiences.

Transparency and explainability. Transparency is the ability to understand how an AI model works and how it reaches its decisions. Because generative AI uses complex deep learning models and neural networks to identify patterns and structures within existing data and generate new content, it can be difficult to explain how and why a given output was derived.

Data bias. If source data incorporates unintended bias or obsolete stereotypes about protected groups, generative AI will reflect similar biases in its output. Biased output skews decision-making and damages corporate reputation.

Hallucinations. Generative AI has been known to provide definitive responses that are not justified by the training data and to make things up when it doesn’t otherwise have a response. These inaccuracies—sometimes called “hallucinations”—put the reliability of generative AI output in doubt.

Disinformation. Malicious actors can use generative AI to publish false narratives that deliberately manipulate or deceive audiences, and sophisticated tools can create hyper-realistic “deepfake” images, videos and audio.

Regulation. As time passes, more legislative and governing bodies are looking at regulatory limits for AI. The EU has aligned on a political agreement putting restrictions on what are perceived to be AI’s riskiest uses. In the US, policymakers have put in place guidelines focused on several aspects of AI policy, especially safety, while directing federal agencies to pursue regulations.

Sustainability. Generative AI and large language models require substantial computing power and energy consumption, contributing to carbon emissions.

How to get the most out of generative AI

Given the early maturity level of generative AI, business leaders should ignore the buzz and pay attention to what they want to achieve for their organizations. A measured approach would focus on goals, protect and responsibly use data, and ensure trusted and reliable output.

Lead with goals.

Instead of letting the technology determine what you do with it, be disciplined in examining if generative AI is even the right solution for the objective or problem at hand. A well-defined business case is the right place to start.

Clearly documenting the business problem and expected outcomes is essential for evaluating the potential gains, risks and scale of investment—and gaining stakeholder commitment. The business case gives development teams details for defining technical requirements, understanding what data is needed and tailoring a solution that syncs with your organization’s resources.

Protect data.

There’s growing confidence that generative AI can create amazing things with the content it’s trained on, but it’s too soon to leave it completely on its own in enterprise applications.

Key to protecting data is having humans in the loop to avoid the misuse of sensitive or biased input data and monitor for distorted outputs.  

Shielding sensitive personal customer or employee information, intellectual property and intellectual capital should be a paramount concern. Cloud providers and other vendors are improving safeguards in this space, but public large language models may offer limited protection for sensitive or confidential information.
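One practical guardrail is to mask sensitive values before any prompt leaves the organization for a public model. The sketch below is illustrative only, assuming simple regex patterns; a real deployment would pair a vetted PII-detection tool with the human review described above.

```python
import re

# Hypothetical patterns for illustration; production systems should use a
# vetted PII-detection library plus human review, not regexes alone.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Mask common PII patterns before the text leaves the organization."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

prompt = "Summarize the complaint from jane.doe@example.com, SSN 123-45-6789."
safe_prompt = redact_pii(prompt)
print(safe_prompt)
# → Summarize the complaint from [EMAIL REDACTED], SSN [SSN REDACTED].
```

Only the redacted prompt, not the original, would be sent to the public large language model.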

Ensure reliable output.

Verification steps and ongoing testing can confirm that generated content doesn’t contain any inaccurate or discriminatory elements that might cause harm.

This transparency improves explainability and fosters confidence in both the process and the finished product. Unless the objective is pure research, there will be an expectation for any generative AI initiative to justify its cost. Organizations are wise to take steps to avoid putting more into a project than they get out of it.
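The ongoing testing described above can be partly automated as a screening gate that runs before human review. This is a minimal sketch under assumed rules: the blocked-terms list and the `check_output` helper are illustrative, not a specific product's API.

```python
import re

# Illustrative list of definitive or discriminatory phrasings to block;
# a real program would maintain this with legal and compliance teams.
BLOCKED_TERMS = ("guaranteed cure", "risk-free")

def check_output(generated: str, source_facts: list[str]) -> list[str]:
    """Return issues found in generated text; an empty list means the text
    passes this automated gate and can move on to human review."""
    issues = []
    lowered = generated.lower()
    for term in BLOCKED_TERMS:
        if term in lowered:
            issues.append(f"blocked term: {term}")
    # Flag figures that don't appear in the approved source facts —
    # a cheap check for hallucinated numbers.
    for number in re.findall(r"\d[\d,.]*%?", generated):
        if not any(number in fact for fact in source_facts):
            issues.append(f"unverified figure: {number}")
    return issues

print(check_output("Sales grew 40% with a risk-free plan.",
                   ["Sales grew 25% in Q3."]))
# → ['blocked term: risk-free', 'unverified figure: 40%']
```

Flagged outputs go back to a human reviewer rather than being published, keeping the gate a complement to oversight, not a replacement for it.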

Stay focused.

While it’s feasible to stand up proof-of-concept prototypes with an initial limited budget, rolling out a production solution may not scale well over time. Smaller, specialized models tailored to specific business cases may yield better results and require a smaller funding commitment for ongoing maintenance.

Keep your guard up.

A responsible AI framework should include privacy and security principles.

Start with strong data.

Costs can balloon quickly, especially if overall organizational data maturity is low. Investing in foundational data governance, quality and observability strategies as a precursor to generative AI business strategy development will optimize the performance, security and ROI of any initiative.
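A foundational data-quality step can be as simple as profiling completeness before records enter a training set. The sketch below is a hypothetical gate; the field names and pass criteria are assumptions, not a specific governance standard.

```python
# Required fields are assumptions for illustration; a governance team
# would define the real list and thresholds.
REQUIRED_FIELDS = ("customer_id", "consent_given", "record_date")

def profile_records(records: list[dict]) -> dict:
    """Report completeness per required field so governance teams can judge
    whether the data is fit for model training."""
    total = len(records)
    completeness = {}
    for field in REQUIRED_FIELDS:
        present = sum(1 for r in records if r.get(field) not in (None, ""))
        completeness[field] = present / total if total else 0.0
    return completeness

records = [
    {"customer_id": "C1", "consent_given": True, "record_date": "2024-01-02"},
    {"customer_id": "C2", "consent_given": None, "record_date": "2024-01-03"},
]
print(profile_records(records))
# → {'customer_id': 1.0, 'consent_given': 0.5, 'record_date': 1.0}
```

A low completeness score, such as the missing consent flag above, signals that the data isn't ready to feed a generative AI initiative.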

Edd Pineda is U.S. Head Data Scientist at Kyndryl