To GenAI or Not To GenAI

Generative AI is the new kid on the block, and it is drawing loads of attention. Every business has been talking and thinking about how it can leverage Generative AI. Quick-to-implement use cases such as summarizing a document, generating common content, or enabling natural language search have already been experimented with or implemented by many organizations.

While enterprises are yet to adopt it at full strength, leaders such as Microsoft, Google, and Salesforce have already started releasing offerings such as M365 Copilot, Google Duet AI, and Salesforce Einstein Copilot. Many other vendors are planning to either build their own models or use those built by these leaders to enable Generative AI capabilities in their products. While these focused groups have moved ahead in their AI strategy, they are just a small portion; the vast majority of enterprises are still trying to make sense of this new trend and understand what it means for their business. Beyond curiosity plays, Generative AI use cases also need to follow a structured approach and follow suit with mainstream implementations.

The following diagram outlines the approach:

Like many other technologies in their nascent stage, Generative AI adoption is being driven largely by technology curiosity. AI-assisted activities such as writing an email or summarizing a meeting are certainly promising; however, Generative AI initiatives need to be tied to tangible business outcomes. Using it for primitive purposes such as summing up revenue details or getting a break-up of revenue does not add much value to the business beyond a transactional wow factor; those activities are already available in voice-enabled descriptive reporting today.

Enterprises need to look at Generative AI through strategic, business, technology, and cost perspectives. Some of the strategic business perspectives are outlined below:

  • AI implementations should be backed by a strong business case that ensures measurable outcomes. For example, generating business emails should either result in better sales or enable someone with weaker written communication skills to sell better.
  • One needs to take a closer look at the organizational implications of using Generative AI. These implications might be related to hiring and grooming skilled employees, having a separate team to moderate AI outcomes, or simply selecting partners for research and implementation.
  • The responsible use of AI is one of the key concerns for enterprises. It ensures that AI capabilities are used in an ethical manner and for the right reasons. Ensuring compliance with laws and respecting the rights of end users are equally important considerations for businesses.

The technology strategy should encompass long-term aspirations that are aligned with the rest of the enterprise objectives.

  • Buy vs. build is an important decision that enterprises need to delve into when fulfilling their business strategy. The whole GPT concept is about reusing existing language models as much as possible, but the journey starts even before that: if an enterprise is inclined towards a certain vendor or technology stack, it is natural to evaluate offerings from that same vendor or solution provider first. The general advice of building only for differentiation holds good for GenAI as well. Narrow AI offerings (specific to a problem statement or function, and hence considered narrowly defined) are also emerging to push reuse further up.
  • Robust security and privacy are essential for GenAI, just like for any other solution. Based on the buy vs. build strategy, one needs to define a security strategy spanning users, systems, hosting, models, and data at the very least. Some requests, such as “show me how I am performing compared to my colleague XYZ?”, should ideally return no results in order to safeguard privacy; a minimal guardrail sketch for this kind of scenario follows this list. Scenarios like these, and many more, should be considered while designing the solution, and the design should cover prompt engineering, LLM considerations, and frameworks such as LangChain at the very least.
  • The outcome of a Generative AI implementation needs to be trustworthy. This means the inputs, outputs, solution, and models require constant monitoring and tuning. Training a GPT-style model can be a cumbersome activity; it requires powerful hardware and skilled people to perfect the algorithms and input variables. One needs to be constantly vigilant about drift in data and models, at the very least.
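
To make the privacy point above concrete, here is a minimal sketch of a pre-processing guardrail that blocks prompts asking to compare the requester against a named colleague before they ever reach a model. The patterns and helper names are illustrative assumptions rather than any specific product's feature; a real deployment would combine policy engines, entity recognition, and access controls.

```python
import re

# Illustrative patterns for requests that compare the requester against a
# named colleague; a real guardrail would use entity detection and
# access-control checks, not just keywords.
BLOCKED_PATTERNS = [
    r"compared? (to|with|against) (my )?(colleague|coworker|peer)",
    r"how (am i|do i) (doing|performing) (versus|vs\.?|against)",
]

def is_blocked(prompt: str) -> bool:
    """Return True if the prompt matches a privacy-sensitive pattern."""
    text = prompt.lower()
    return any(re.search(pattern, text) for pattern in BLOCKED_PATTERNS)

def handle_prompt(prompt: str) -> str:
    """Apply the guardrail before the prompt would be sent to an LLM."""
    if is_blocked(prompt):
        return "This request involves another individual's data and cannot be answered."
    return f"(forwarded to model) {prompt}"  # placeholder for the actual LLM call

print(handle_prompt("Show me how I am performing compared to my colleague XYZ?"))
print(handle_prompt("Summarize my sales performance for this quarter."))
```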

Unmonitored cost is bad for business. The two primary sources of cost are business and technology. For a profitable business case, revenue should be higher than the cost of doing business plus the cost of the technology that enables it. Technology costs may spiral out of control if the following are not taken into consideration:

  • Selection of the right model is key to producing business-relevant output. Language models, image models, and embedding models are priced differently, and there is also a cost difference between GPT-3.5 and GPT-4. One needs to take a hard look at aspirational outcomes versus actual needs for better cost control.
  • The quality of AI outcomes is determined by the models, algorithms, and the data used to train the models. Once the model is trained and presumably able to produce an acceptable level of satisfaction, the relevance of the model's output is determined by the data source the model refers to. The process of making the right data available as a source to the model is called data grounding; a minimal grounding sketch appears after this list. The grounding strategy will vary based on output expectations and business scenarios, and seemingly similar business scenarios may still need different data grounding approaches to ensure that outputs are not mixed up.
  • Prompt engineering is commonly used in Generative AI implementations, and it is not necessarily easy: it is similar to training someone in your preferred language. In the case of Generative AI, a prompt involves asking questions to get the desired response. A common billing approach is token-based billing, where approximately 4 characters represent a token; a rough token-cost estimation sketch follows this list. Prompt engineering involves guiding the user to ask the right questions and to avoid asking unwanted ones. Left unmonitored, the tendency of end users to keep asking questions until they get the right answer is a classic recipe for cost overrun.
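
To illustrate the token-based billing point above, the following sketch estimates per-request cost from the rough rule of thumb that one token is about four characters. The per-1,000-token prices are placeholder assumptions used only to contrast a cheaper and a pricier model, not actual vendor rates; real estimates should use the provider's published pricing and a proper tokenizer such as tiktoken.

```python
# Rough token and cost estimator; assumes ~4 characters per token, which is
# only an approximation of how real tokenizers behave.
CHARS_PER_TOKEN = 4

# Placeholder prices per 1,000 tokens for a cheaper and a pricier model;
# substitute the provider's actual published rates.
PRICE_PER_1K_TOKENS = {
    "cheaper-model": 0.002,
    "pricier-model": 0.03,
}

def estimate_tokens(text: str) -> int:
    """Approximate the token count from character length."""
    return max(1, len(text) // CHARS_PER_TOKEN)

def estimate_cost(prompt: str, expected_response_chars: int, model: str) -> float:
    """Estimate the cost of one request, including the expected response."""
    total_tokens = estimate_tokens(prompt) + expected_response_chars // CHARS_PER_TOKEN
    return total_tokens / 1000 * PRICE_PER_1K_TOKENS[model]

prompt = "Summarize the attached quarterly business review in five bullet points."
for model in PRICE_PER_1K_TOKENS:
    cost = estimate_cost(prompt, expected_response_chars=2000, model=model)
    print(f"{model}: ~${cost:.4f} per request")
```

Multiplying a per-request estimate like this by expected daily usage, and by the number of retries users typically make, is a quick way to spot the "keep asking until it looks right" pattern before it shows up on the invoice.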

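As a companion to the data grounding point above, here is a minimal sketch of the general shape of grounding: retrieve only the most relevant enterprise documents and supply them as context alongside the user's question. The document names and the keyword-overlap scoring are deliberately simplified assumptions for illustration; most real implementations use embeddings and a vector store for retrieval.

```python
# Minimal grounding sketch: pick the most relevant documents for a question
# and assemble them into the prompt context. Real systems would use vector
# embeddings and a vector store instead of keyword overlap.
DOCUMENTS = {
    "travel-policy": "Employees may book economy class for trips under 6 hours.",
    "expense-policy": "Meal expenses are reimbursed up to the regional per-diem limit.",
    "it-policy": "Personal devices must be enrolled in device management before use.",
}

def relevance(question: str, document: str) -> int:
    """Count words shared between the question and a document (toy relevance)."""
    return len(set(question.lower().split()) & set(document.lower().split()))

def ground(question: str, top_k: int = 1) -> str:
    """Build a grounded prompt containing only the most relevant documents."""
    ranked = sorted(DOCUMENTS.items(), key=lambda item: relevance(question, item[1]), reverse=True)
    context = "\n".join(text for _, text in ranked[:top_k])
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(ground("What class can employees book for short trips?"))
```

Swapping the document set, or the retrieval scope, per business scenario is what keeps seemingly similar use cases from mixing up each other's outputs.
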
In a conventional model, the cost of doing business includes the technology cost; I am separating them out here for simplicity and to associate the following with business-centric activities:

  • Once the solution is rolled out, it needs to be operationalized. The Generative AI solution, like any other automation initiative, can produce value only if it is available to the end user as needed and produces the expected outcome. This means a dedicated team of specialized business professionals would be required for operational efficiency to ensure functional accuracy. This specialized team needs to have access to the right set of tools and possess the necessary functional knowledge to guide users and discover new possibilities that could benefit the organization.
  • It is one thing to develop something, and it is altogether different to drive adoption. Generative AI outcomes are accompanied by fine print that reads ‘output may not be accurate’. What it essentially means is that the output requires validation by someone – an individual or a group. At times, the output might be presented with a high confidence rating, but it might be splendidly incorrect. This phenomenon is referred to as hallucination. Yes, AI can hallucinate and lead users in the wrong direction. End users expecting near 100% assistance would surely be disappointed with such a scenario. Some of these scenarios require strong change management considerations when it comes to usage by business users. Though change management is equally important for technology changes, it needs to have a business-first approach as technology remediation is not straightforward when it comes to correcting such anomalies in Generative AI.
  • Generative AI has landed in mainstream usage, though the majority of AI areas are still in research. Generative AI itself is also evolving; with growing data volumes and digital literacy in the world, we have entered an interesting era where AI evolution will be much faster than in the past. Even with all the hype around it, this area is still niche and requires enterprises to set aside a budget for research and the development of internal capabilities. One thing is for sure: GPT has definitely switched the focus towards reuse more than anything in the past.

The thoughts expressed here are intended to trigger further thinking, or possibly to serve as a framework for mapping your own enterprise needs into structured, outcome-based thinking. The scenarios captured here are not exhaustive, but they are some of the most common ones I came across in discussions with my peers in the industry. Please leave a comment or provide feedback if you have more ideas or additional thoughts based on your experience.
