Joe Rose, president at strategic technology provider JBS Dev, is challenging a pervasive myth surrounding the implementation of generative and agentic AI systems. “It’s a common misconception that your data has to be perfect before you engage in any of these types of workloads,” he stated.
While some vendors and consultants might advocate for extensive data lake initiatives and multi-year data transformation programs, Rose suggests that the reality is far more accessible. “The tooling available today to handle less-than-perfect data is unprecedented,” Rose explained. “It’s quite remarkable what a large language model can interpret from even a partially formulated prompt.”
This accessibility presents a significant advantage. Leveraging these powerful tools, with appropriate guardrails, can accelerate AI adoption. However, the inherent unpredictability of AI models necessitates robust mechanisms for managing imperfect outputs, underscoring the crucial role of human oversight. Unlike traditional systems where “we build it, it works, we forget about it,” Rose emphasizes that generative AI requires continuous engagement. “That’s simply not how these systems operate,” he noted.
Illustrating the practical application of this approach, Rose shared an example of a client in the healthcare sector aiming to transition to a new billing reconciliation system. The existing data was fragmented and inconsistent: records were stored as PDFs and images, and often contained errors such as procedure names mistakenly listed under doctors’ names. Generative AI was employed to identify and extract clean data through simple prompts, handling OCR for images and text extraction for PDFs. Subsequently, more advanced agentic techniques were deployed to compare customer records against insurance contracts, verifying the accuracy of billed rates.
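To make the cleanup step concrete, the sketch below shows one hypothetical shape such a pipeline could take: detecting a procedure name misfiled in a doctor field and reclassifying it. The field names and keyword list are illustrative assumptions, not details from JBS Dev’s project; in practice, an LLM prompt would replace the simple keyword heuristic shown here.

```python
# Illustrative sketch only: field names and the keyword list are hypothetical.
# In a real pipeline, an LLM prompt would do this classification far more
# robustly than the keyword check used here as a stand-in.

# Hypothetical procedure keywords for spotting misfiled entries.
PROCEDURE_KEYWORDS = {"mri", "x-ray", "ct scan", "biopsy", "ultrasound"}

def looks_like_procedure(value: str) -> bool:
    """Heuristic stand-in for an LLM classification prompt."""
    return any(kw in value.lower() for kw in PROCEDURE_KEYWORDS)

def clean_record(record: dict) -> dict:
    """Move procedure names mistakenly listed under the doctor field."""
    cleaned = dict(record)
    doctor = cleaned.get("doctor") or ""
    if looks_like_procedure(doctor):
        # Reclassify: the 'doctor' field actually holds a procedure name.
        cleaned.setdefault("procedure", doctor)
        cleaned["doctor"] = None
    return cleaned

record = {"doctor": "MRI Scan - Lumbar", "amount": "250.00"}
print(clean_record(record))
# {'doctor': None, 'amount': '250.00', 'procedure': 'MRI Scan - Lumbar'}
```

The same routing idea extends to the input formats Rose mentions: images would first pass through OCR and PDFs through text extraction before records reach a cleanup step like this one.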
“The strategy involves layering different use cases progressively,” Rose elaborated. “This doesn’t imply immediate perfection; human oversight remains essential. However, the objective is to move from an initial automation level of, say, 20%, to 40%, then 60%, and ultimately 80%, gradually enhancing efficiency over time.”
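One common way to implement this kind of progressive automation, sketched below under assumed confidence scores and thresholds (none of which come from the article), is to auto-accept only high-confidence outputs and queue the rest for human review. Moving from 20% toward 80% automation then corresponds to relaxing the review threshold as trust in the system grows.

```python
# Minimal sketch of human-in-the-loop routing; scores and thresholds
# are hypothetical assumptions, not values from JBS Dev's deployment.

def route(outputs: list[tuple[str, float]], threshold: float):
    """Split model outputs into auto-accepted and human-review queues."""
    auto, review = [], []
    for result, confidence in outputs:
        (auto if confidence >= threshold else review).append(result)
    return auto, review

outputs = [("claim-1", 0.95), ("claim-2", 0.60),
           ("claim-3", 0.85), ("claim-4", 0.40)]

# Early phase: strict threshold, so most items go to human review.
auto, review = route(outputs, threshold=0.90)
print(len(auto), len(review))  # 1 3

# Later phase: relaxed threshold as confidence in the outputs grows.
auto, review = route(outputs, threshold=0.55)
print(len(auto), len(review))  # 3 1
```

The human-review queue never disappears entirely, which matches Rose’s point that oversight remains essential even at high automation levels.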
Looking ahead, Rose anticipates that future discussions around AI models will increasingly focus on cost-efficiency and portability. “I believe we’ll witness a departure from solely emphasizing radical leaps in model capabilities and a greater shift towards finding sustainable cost models that reduce the need for massive data center expansions,” he predicted.
He further elaborated, “The ultimate goal for the ‘last mile’ is to enable these systems to run on devices like laptops or smartphones, rather than being tethered to data centers. These models have been trained on an exhaustive dataset, encompassing virtually the entirety of the internet and beyond. It’s unlikely that vast quantities of untapped data will emerge to drive entirely new breakthroughs.”
Rose is eagerly anticipating discussions at the AI & Big Data Expo, where JBS Dev will be participating. He intends to present a perhaps controversial viewpoint: urging organizations to reconsider purchasing from SaaS vendors when in-house development is feasible. “It’s not as daunting as it might sound,” he asserted. “Most organizations already have a cloud presence, which serves as an ideal starting point. The cloud tooling offered by the major providers, in particular, equips you with everything needed to implement agentic workloads immediately, without incurring new software licenses or extensive training.”
With this foundation in place, JBS Dev positions itself as a partner for organizations navigating the subsequent stages of their AI journey.
Original article, Author: Samuel Thompson. If you wish to reprint this article, please indicate the source: https://aicnbc.com/21641.html