Generative AI came to the fore in 2023 and much has been learned about how effective it can be in the enterprise – so, what are the challenges facing its deployment?
Generative AI, otherwise known as genAI, is a digital tool that can be trained to produce text, imagery, audio or synthetic data from a series of prompts. The recent surge of interest in generative AI tools has been driven by their ease of use for creating high-quality content in a matter of seconds.
Much of this has been related to the release of ChatGPT by OpenAI in November 2022. Many were surprised by the generative AI’s ability to create different types of text based on a series of text prompts from a user.
The UK government recognised the importance of AI in its autumn statement for 2023. Jeremy Hunt, the chancellor of the exchequer, unveiled £500m of investment to boost the UK’s artificial intelligence capabilities. Microsoft also announced it would be investing £2.5bn in the UK’s AI infrastructure and skills over the next three years.
“In the fullness of time, generative AI will affect pretty much every enterprise that we can think of, and ones we have not yet thought of,” says Michael Fertik, founder and managing director of venture capital firm Heroic Ventures.
“Right now, the primary functions of generative AI, that we can see on the immediate horizon, have to do with software development and anything to do with understanding numbers or digesting large amounts of textual data and producing output from that understanding – what I would call continuous audit.”
The uses of generative AI
The potential applications for generative AI are incredibly varied. The current focus seems to be on automating customer support, whereby an appropriately trained generative AI can swiftly identify and resolve customer queries. However, as the Chatbot Summit in London recently demonstrated, potential applications also include healthcare guidance and automated negotiation tactics.
BT is developing a generative AI customer-facing digital assistant, known as Aimee, as the initial point of contact for customer support. The current iteration of Aimee is intended to assess customer queries and identify the most appropriate support staff to assist them. This should improve the efficiency of support staff, while reducing the time customers spend waiting for their problem to be resolved.
Mercedes-Benz is combining its MBUX voice assistant with ChatGPT’s large language model. This has resulted in a voice assistant that accepts more natural voice commands and can conduct entire conversations and contribute ideas.
“By integrating ChatGPT, Mercedes-Benz can further improve its established MBUX voice assistant and continuously expand it,” says Alexander Schmitt, head of speech technology at Mercedes.
Fertik posits that the entertainment industry could be an ideal platform for generative AI. The UK video game industry is worth over £7bn and already uses content generation algorithms in some of its video games, such as procedural generation to ensure each level of a game is unique.
“I am calling it game generation,” says Fertik. “I believe that everyone will be able to create gaming environments and characters in real time using text visualisation bases, with some kind of cognitive stable diffusion.”
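Procedural generation of the kind mentioned above can be sketched in a few lines. The grid format, wall density and treasure placement below are invented for illustration; the key property is that a seed makes each level reproducible, while different seeds yield different layouts:

```python
import random

def generate_level(seed: int, width: int = 10, height: int = 5) -> list[str]:
    """Generate a deterministic, seed-unique grid level.

    '#' = wall, '.' = floor, 'T' = treasure. The same seed always
    reproduces the same level; different seeds give different layouts.
    """
    rng = random.Random(seed)
    level = []
    for _ in range(height):
        # Roughly a quarter of tiles become walls.
        row = "".join("#" if rng.random() < 0.25 else "." for _ in range(width))
        level.append(row)
    # Place a single treasure on a randomly chosen floor tile.
    floors = [(x, y) for y, r in enumerate(level) for x, c in enumerate(r) if c == "."]
    tx, ty = rng.choice(floors)
    level[ty] = level[ty][:tx] + "T" + level[ty][tx + 1:]
    return level
```

Games built this way ship the generator rather than the levels themselves, which is why every playthrough can be unique.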
Content creation, such as scripts, is likely to be massively impacted by generative AI. We already use generative AI to a lesser extent, such as the auto-complete functions in some email platforms, which suggest the completion of sentences. In a similar vein, coding could, at least partially, be automated through generative code writing.
Jailbreaking and the challenges of generative AI
One concern surrounding generative AI is the potential for jailbreaking. This happens when the behaviour of an AI is subverted for malicious purposes, such as by creating prompts to violate the content guidelines of the AI model and misuse it. A notable example occurred in 2016, when Microsoft’s AI chatbot Tay on Twitter posted offensive tweets and was shut down 16 hours after its launch. According to Microsoft, this was caused by certain users who subverted the service, as the bot made replies based on its interactions with people on the platform.
The risk posed by jailbreaking means that generative AI needs to be carefully trained with the appropriate guardrails. Clearly defined boundaries during the training of a generative AI will ensure that it does not deviate from established parameters. Utilising a closed sandbox environment to train a generative AI mitigates the risk of unintended consequences, compared to training an AI over the open internet. However, there also needs to be diversity within the training data to avoid being locked into a single cultural mindset.
“It’s important that any vendors we work with – and a lot of them already have these protocols in place – make sure they are constantly prompting and testing the model,” says Kenneth Deeley, senior engineering manager for conversational AI at BT. “In our case, the model is hosted on our own private cloud and will be purely set to what our data is. We’ve only trained it on a roaming URL.”
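The kind of guardrail described above can be illustrated, in heavily simplified form, by a keyword filter applied before a prompt ever reaches the model. The blocked-topic list is invented for the example; production systems layer trained classifiers and output filters on top of checks like this:

```python
# Illustrative blocked-topic list; real systems use trained classifiers.
BLOCKED_TOPICS = {"weapon", "malware", "self-harm"}

def guardrail_check(prompt: str) -> tuple[bool, str]:
    """Reject prompts mentioning a blocked topic before they reach the model."""
    lowered = prompt.lower()
    for topic in BLOCKED_TOPICS:
        if topic in lowered:
            return False, f"Prompt rejected: touches blocked topic '{topic}'."
    return True, "Prompt allowed."

allowed, reason = guardrail_check("How do I write malware?")
print(allowed, reason)  # → False, with the rejection reason
```

Constantly re-testing the model with adversarial prompts, as Deeley describes, is what catches the jailbreaks that simple filters like this miss.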
Another concern surrounding generative AI is that it will automate tasks and replace humans. It cannot be denied that generative AI will have a significant impact upon how we approach work. However, given the limitations of the technology, generative AI will most likely augment human roles, rather than becoming a replacement for people. By automating the repetitive and mundane aspects of a role, it will allow people to focus their time on more complex issues.
Generative AI can be something of a black box, as we do not always fully understand how it arrives at the answers it provides. How strictly an AI is trained and how much flexibility it is given in generating answers depend heavily on the task it is intended for. In legal and technical roles, it will need to adhere strictly to what it has already learned, while less constrained roles – such as within design – could allow for more flexible approaches.
“We do not understand the feed-forward mechanism. No one actually knows the moment in which the neural network begins to understand something,” says Fertik. “Nobody knows why, and that’s marvellous. That means, at least in some sense and some level of abstraction, we’re looking at a living thing.”
An issue that has arisen through generative AI is its habit of “hallucinating” answers if it does not know the definite answer to a problem. “If generative AI doesn’t know the answer, it tries to work out what the answer could be,” explains Deeley. “We have a responsibility to make sure the information that we’re giving out is accurate. We can’t have best guess answers, we need to make sure that the answers are accurate.”
Generative AI is being trialled within the legal profession as a possible tool to assist in writing legal documents. In one instance, a fee earner, such as a solicitor or paralegal, used generative AI to help write a report for a team of solicitors. However, when the report was reviewed by the solicitors, they could not find a particular piece of case law that had been referenced. When they looked into this further, they realised the generative AI had taken two separate pieces of case law and combined them into a fictitious one.
Another issue that needs to be addressed is controlling bias within the datasets that developers use to train generative AI. Clean data, meaning data that is free from bias, gives accurate results. However, ensuring data is an accurate and true representation of information can be challenging. This can be especially true if there is a reliance on historical data, which may be influenced by the cultural perceptions of the time it was recorded.
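One simple, illustrative check for this kind of dataset bias is to measure how skewed the label distribution is before training. The dataset and the 9:1 split here are invented for the example; real bias audits go far beyond counting labels:

```python
from collections import Counter

def label_skew(labels: list[str]) -> float:
    """Ratio of the most- to least-frequent label; 1.0 means perfectly balanced."""
    counts = Counter(labels)
    return max(counts.values()) / min(counts.values())

# A hypothetical historical decision dataset, heavily skewed one way:
historical = ["approved"] * 90 + ["rejected"] * 10
print(label_skew(historical))  # → 9.0
```

A model trained on such data would tend to reproduce the historical imbalance, which is why the distribution should be examined and corrected before training.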
However, not all organisations want unbiased datasets. Retailers and vendors will naturally prefer generative AI to recommend their own products, rather than those of their competitors. From an unbiased perspective, a competitor’s product may well be better value.
The future of AI
The black box nature of generative AI means there will always be a need for human oversight, rather than blind acceptance of whatever the generative AI comes up with. Generative AI can produce answers to questions and prompts with incredible rapidity, but they may not always be accurate or appropriate.
Despite the concerns regarding jailbreaking and hallucination, generative AI will impact all business roles by automating mundane and repetitive tasks. However, an over-reliance on generative AI could lead to costly mistakes.
Likewise, not everyone will be comfortable using generative AI. In particular, some elderly or neurodivergent customers may struggle to use or respond to AI assistants. It is therefore essential that customer support staff remain available.
The key to a successful deployment of this technology will be to augment existing roles using digital tools, rather than replacing them entirely with generative AI. “AI is going to become part of the landscape, like gravity or oxygen,” concludes Fertik. “It’s going to be part of every company.”