After nearly two years of experimentation with generative AI, many IT leaders are ready to scale up. Before they do, however, they need to rethink data management.
According to Kari Briski, VP of AI models, software, and services at Nvidia, successfully implementing gen AI hinges on effective data management and on evaluating how different models work together to serve a specific use case. While a few elite organizations like Nvidia use gen AI for things like designing new chips, most have settled on less sophisticated use cases that employ simpler models, which frees them to focus on achieving excellence in data management.
And Doug Shannon, automation and AI practitioner and Gartner peer community ambassador, says the vast majority of enterprises are now focused on two categories of use cases that are most likely to deliver positive ROI. The first is knowledge management (KM): collecting enterprise information, categorizing it, and feeding it to a model so users can query it. The second is retrieval-augmented generation (RAG), where pieces of data from a larger source are vectorized so users can "talk" to the data. For example, they can take a thousand-page document, have it ingested by the model, and then ask the model questions about it, as the sketch below illustrates.
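To make that RAG pattern more concrete, here is a minimal sketch in Python. It is illustrative only: the file name thousand_page_report.txt is hypothetical, the embed() function is a toy bag-of-words stand-in for a real embedding model, and the assembled prompt is printed rather than sent to an actual gen AI model.

```python
# Minimal sketch of the RAG pattern described above: chunk a large document,
# vectorize the chunks, then retrieve the most relevant ones for a question.
import math
from collections import Counter


def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; a production system would call an embedding model."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def chunk(document: str, size: int = 200) -> list[str]:
    """Split a long document into fixed-size word chunks."""
    words = document.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]


def retrieve(question: str, chunks: list[str], top_k: int = 3) -> list[str]:
    """Return the chunks most similar to the question."""
    q_vec = embed(question)
    ranked = sorted(chunks, key=lambda c: cosine(q_vec, embed(c)), reverse=True)
    return ranked[:top_k]


if __name__ == "__main__":
    document = open("thousand_page_report.txt").read()  # hypothetical source file
    question = "What were the key findings?"
    context = "\n\n".join(retrieve(question, chunk(document)))
    # In practice this prompt would be sent to a gen AI model; printed here for illustration.
    print(f"Answer the question using only this context:\n{context}\n\nQuestion: {question}")
```

The point of the sketch is the flow, not the components: the enterprise's own data is chunked and vectorized once, and each user question retrieves only the most relevant pieces to ground the model's answer.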
"In both of these kinds of use cases, the enterprise relies on its own data, and it costs money to leverage your own information," says Shannon. "Small- and medium-sized companies are at a big advantage compared to large enterprises burdened with legacy processes, tools, applications, and people. We all get in our own way sometimes when we hang on to old habits."