Generative AI is evolving. Knowledge-based applications such as AI chatbots and copilots are giving way to autonomous agents that can reason and execute complex, multi-step workflows, powered by what is known as agentic AI. These systems are poised to change the way businesses operate, able to understand context, set goals and adapt their actions to changing conditions.
With these capabilities, agentic AI could perform a range of tasks previously thought impossible for a machine – such as identifying sales targets and making pitches, analyzing and optimizing supply chains, or acting as a personal assistant that manages employees’ time.
Amazon’s recent collaboration with Adept, an agentic AI specialist, signals growing recognition of these systems’ potential to automate diverse, complex use cases across business functions. But to fully leverage this technology, organizations must first address several underlying data challenges, including latency issues, data silos and inconsistent data.
Rahul Pradhan, VP Product and Strategy, Couchbase.
The three foundations of agentic AI
To carry out its complex functions successfully, agentic AI needs three core components: a plan to work from, large language models (LLMs) and access to robust memory.
A plan allows the agent to perform complex, multi-step tasks. Handling a customer complaint, for example, may follow a predefined plan: verify the customer’s identity, collect details, propose a resolution and confirm it.
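To make this concrete, here is a minimal sketch of how such a predefined plan might be represented in code. It is purely illustrative: the step names mirror the complaint-handling example above, and the `Plan` and `PlanStep` classes are hypothetical rather than taken from any particular agent framework.

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class PlanStep:
    name: str           # e.g. "verify_identity" (illustrative step name)
    done: bool = False  # marked True once the agent completes the step

@dataclass
class Plan:
    steps: list[PlanStep] = field(default_factory=list)

    def next_step(self) -> PlanStep | None:
        """Return the first unfinished step, or None once the plan is complete."""
        return next((s for s in self.steps if not s.done), None)

# The four-step complaint-handling plan described above
complaint_plan = Plan(steps=[
    PlanStep("verify_identity"),
    PlanStep("collect_details"),
    PlanStep("propose_resolution"),
    PlanStep("confirm_resolution"),
])
```

Representing the plan as explicit data, rather than leaving it implicit in prompts, is what lets the agent track progress across a multi-step task.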
To follow this plan, an AI agent can use multiple LLMs to break down problems and perform subtasks. In a customer service context, the agent could use one LLM to summarize the conversation so far, creating a working memory it can refer back to. A second LLM can then plan the next steps, and a third can evaluate the quality of those steps. Finally, a fourth LLM generates the response the customer sees, informing them of potential solutions to their problem.
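That division of labor can be sketched as a simple pipeline. The sketch below assumes a generic `call_llm` helper standing in for whatever chat-completion API an organization uses; the prompts and function names are illustrative only.

```python
def call_llm(role_prompt: str, content: str) -> str:
    """Stand-in for a real chat-completion call; wire to your LLM provider."""
    raise NotImplementedError

def handle_turn(conversation: str) -> str:
    # LLM 1: compress the conversation into a working memory the agent can reuse
    summary = call_llm("Summarize this customer conversation.", conversation)
    # LLM 2: plan the next steps from that summary
    plan = call_llm("Propose the next steps to resolve the issue.", summary)
    # LLM 3: evaluate the proposed steps before acting on them
    critique = call_llm("Assess whether these steps will resolve the issue.", plan)
    # LLM 4: generate the customer-facing reply
    return call_llm("Write a reply offering these solutions.", plan + "\n" + critique)
```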
And just like humans, agentic AI systems cannot make informed decisions without memory. Imagine a healthcare assistant AI with access to a patient’s medical history, records and previous consultations. Remembering and drawing on this data allows the AI to provide personalized, accurate information – explaining to a patient why a treatment was adjusted, or reminding them of test results and doctor’s notes.
Both short-term and long-term memory are needed: the former for tasks that require immediate attention, the latter to build the contextual understanding the AI relies on for future inferences. But here lies one of the biggest obstacles to optimizing agentic AI today: enterprise databases are often not advanced enough to support these memory systems, limiting the AI’s ability to deliver accurate, personalized insights.
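As a rough illustration of the two memory tiers, the sketch below pairs a bounded short-term buffer with a simple long-term store. In a real deployment the long-term store would be a database rather than an in-memory dictionary; the class and method names here are hypothetical.

```python
from collections import deque

class AgentMemory:
    """Toy memory model: a bounded short-term buffer plus a persistent long-term store."""

    def __init__(self, short_term_size: int = 20):
        self.short_term = deque(maxlen=short_term_size)  # recent turns, immediate attention
        self.long_term: dict[str, str] = {}              # durable facts keyed by topic

    def remember_turn(self, turn: str) -> None:
        # Oldest turns fall off automatically once the buffer is full
        self.short_term.append(turn)

    def store_fact(self, key: str, fact: str) -> None:
        # In production this would be a database write, not a dict assignment
        self.long_term[key] = fact

    def context_for(self, key: str) -> str:
        """Assemble context for the next inference from both memory tiers."""
        fact = self.long_term.get(key, "")
        return "\n".join([fact, *self.short_term])
```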
The data architecture needed to support AI agents
The dominant approach to meeting these memory requirements is to use dedicated, special-purpose database management systems for different data workloads. But stitching together a complex web of standalone databases can hurt an AI’s performance in a number of ways.
Latency issues arise when the different databases respond at different speeds, causing delays that can disrupt AI operations. Data silos, where information is isolated in separate databases, deny the AI a unified view and prevent comprehensive analysis, causing the agent to miss connections and produce incomplete results. And at a more fundamental level, inconsistent data – variations in quality, formatting or accuracy – can introduce errors and skew analysis, leading to faulty decision-making. Running multiple single-purpose database solutions also creates data sprawl, complexity and risk, making it harder to trace the source of AI hallucinations and troubleshoot errant variables.
Many databases are also ill-suited to the speed and scalability AI systems require. Their limitations become more pronounced in multi-agent environments, where fast access to large volumes of data, for example by LLMs, is essential. In fact, only 25% of companies have high-performance databases that can handle unstructured data at speed, and only 31% have consolidated their database architecture into a unified model. These databases will struggle to meet GenAI’s demands, let alone support unbounded AI growth.
As GenAI matures and agentic AI becomes more widespread, unified data platforms will become central to any successful AI implementation. Modern data architectures reduce latency, manage structured and unstructured data efficiently, streamline access, and scale on demand. This will be key to building coherent, interoperable and resilient memory infrastructures, finally allowing enterprises to benefit from the automation, precision and adaptability that agentic AI has to offer.
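To illustrate what “unified” means in practice, the sketch below imagines a single platform interface serving document lookup, SQL-style queries and vector search through one access path. The `UnifiedDataPlatform` protocol and its methods are hypothetical, not a real product API.

```python
from typing import Any, Protocol

class UnifiedDataPlatform(Protocol):
    """Hypothetical interface: one platform, one access path, multiple workload types."""
    def get_document(self, collection: str, key: str) -> dict[str, Any]: ...
    def run_query(self, sql: str) -> list[dict[str, Any]]: ...
    def vector_search(self, index: str, embedding: list[float], k: int) -> list[dict[str, Any]]: ...

def build_agent_context(db: UnifiedDataPlatform, patient_id: str,
                        query_embedding: list[float]) -> dict[str, Any]:
    # All three lookups hit the same platform, avoiding cross-database
    # latency mismatches and the silo problem described above.
    return {
        "profile": db.get_document("patients", patient_id),
        "recent_visits": db.run_query(
            f"SELECT * FROM consultations WHERE patient_id = '{patient_id}' "
            "ORDER BY date DESC LIMIT 5"
        ),
        "similar_cases": db.vector_search("case_notes", query_embedding, k=3),
    }
```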
Embrace the AI revolution
Agentic AI opens the door to a new era in which AI agents act as collaborators and innovators, fundamentally changing the way people interact with technology. Once businesses overcome the challenge of disparate data sources and build optimized memory systems, they will unlock widespread adoption of tools that can think and learn like humans, with unprecedented levels of efficiency, insight and automation.