PlanYear AI Newsletter - October 2024
Welcome to the PlanYear AI Newsletter for October 2024! The goal: to help you understand artificial intelligence within the context of employee benefits. Each issue, we’ll provide articles, case studies, and insights about what's going on in AI for Employee Benefits (EB).
In this issue: Document Automation AI: Challenges in Training Custom Models
AI presents a major opportunity for employee benefits teams. For the first time, there's a tangible solution to a problem that has plagued every firm: manual data entry. AI can transform documents from carriers into client-ready formats with almost zero human effort.
Given the massive opportunity in brokerage, it isn't a matter of if firms will invest in AI, but how. And a crucial question companies always face with new technology is whether to build or buy. If names such as “Apple Intelligence” or “Galaxy AI” ring a bell, you've probably seen more than your fair share of AI marketing. After the recent lull in tech, sales teams from the largest companies (Google, Microsoft, etc.) have been eager to push these new solutions. And to widen their market, they are pitching build-your-own solutions directly to companies, including brokerage firms. But what would it actually entail to build your own solution to the data problem in brokerage?
In this month’s newsletter, I’ve tried to break down the complexities of automating manual data entry with AI in brokerage. And more importantly, what it would entail if you were to try to build a system of AI tools yourself.
If you were attempting to automate any part of quote ingestion from carriers, previously your only hope was that the data arrived in a standardized format. Traditional machine learning, for example, was great at taking data in a specific, consistent layout and translating it into structured output. In other words, every carrier would have to use the same format for every quote and never change that formatting over time.
With large language models, however, everything changed. Suddenly, AI could read unstructured data. Carriers could still send their proposals in whatever format they liked, and large language models could process them accurately.
That being said, large language models require a lot of training, tuning, and upkeep. One cannot simply upload a document and expect an LLM to output the right data in the right format. A ton of work goes into engineering and maintaining the structure and prompts needed to support such a complex workflow.
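To make "engineering and maintaining the right structure" a little more concrete, here is a minimal sketch of one common guardrail in such a workflow: checking an LLM's extracted output against a fixed schema before it reaches a client-ready document. The field names and types below are purely illustrative, not an actual carrier schema or the PlanYear implementation.

```python
import json

# Illustrative schema: every extraction must contain these fields
# with these types before it is allowed downstream.
REQUIRED_FIELDS = {
    "carrier_name": str,
    "plan_name": str,
    "monthly_premium": float,
    "deductible": float,
}

def validate_extraction(raw_llm_output: str) -> dict:
    """Parse the model's JSON reply and check every required field.

    Raises ValueError so the workflow can route the document to a
    human reviewer instead of silently passing bad data through.
    """
    data = json.loads(raw_llm_output)
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in data:
            raise ValueError(f"missing field: {field}")
        if not isinstance(data[field], expected_type):
            raise ValueError(
                f"wrong type for {field}: {type(data[field]).__name__}"
            )
    return data

# A well-formed reply passes; a malformed one is caught and escalated.
good = ('{"carrier_name": "Acme", "plan_name": "PPO 500", '
        '"monthly_premium": 412.50, "deductible": 500.0}')
print(validate_extraction(good)["plan_name"])
```

In a real pipeline this kind of check sits between the model and the output spreadsheet, and failures feed the human-intervention queue mentioned below.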
Here are some of the challenges with AI and document automation:
AI models can degrade in quality over time following updates and releases - For example, Claude 3.5 was recently reported to have suffered quality degradation after an update. Notably, earlier versions of the model, available through cloud providers like Azure AI and AWS Bedrock, did not exhibit the same level of degradation, suggesting the issue was specific to the updated model hosted by Anthropic. ChatGPT has faced similar issues in the past.
Takeaway - When a model’s performance declines, you need to quickly replace it with a more reliable model, adjust all of the prompts, and in some cases backfill with human intervention to ensure consistent and high-quality results. Monitoring, testing, and manually intervening when needed all require considerable resources.
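The monitoring described above is often implemented as a "golden set" regression suite: a fixed list of documents with known correct answers, re-run against the model and scored. Here is a toy, self-contained sketch of that idea; `call_model` is a stub standing in for whatever LLM API a firm actually uses, and the threshold is an arbitrary example value.

```python
# Golden cases: (document snippet, expected extracted value).
# A production suite would hold thousands of real carrier examples.
GOLDEN_CASES = [
    ("Monthly premium: $412.50", "412.50"),
    ("Deductible: $1,500", "1500"),
]

def call_model(snippet: str) -> str:
    # Stand-in for a real LLM call so this sketch runs on its own;
    # a real harness would hit the model API here.
    digits = "".join(ch for ch in snippet if ch.isdigit() or ch == ".")
    return digits.rstrip(".")

def accuracy(cases) -> float:
    """Fraction of golden cases the current model still gets right."""
    hits = sum(1 for snippet, want in cases if call_model(snippet) == want)
    return hits / len(cases)

ALERT_THRESHOLD = 0.95  # illustrative; teams pick their own bar

score = accuracy(GOLDEN_CASES)
if score < ALERT_THRESHOLD:
    print(f"ALERT: accuracy dropped to {score:.0%}; consider a rollback")
else:
    print(f"OK: accuracy {score:.0%}")
```

Running a suite like this after every model or prompt change is what makes a silent quality regression visible before clients see it.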
Niche scenarios like employee benefits require fine-tuned models - general-purpose AI models available on the web aren't nuanced enough to consistently recognize the varying formats of data and language used by carriers. More specifically, consumer models like ChatGPT don't allow granular control over crucial AI output parameters such as temperature, Top-K, and Top-P. (I had to look this up, and essentially this means: ChatGPT doesn't let regular users adjust the settings that control how creative or focused the AI's answers are. It's like not being able to adjust the temperature on your oven - you can't simply turn a dial to make the AI more creative or more predictable. This limits the ability to tune responses to suit specific use cases.)
Takeaway - For specialized domains like employee benefits, custom-built and fine-tuned AI models are essential. While general-purpose AI can handle a wide range of tasks, niche applications often require precise control over model parameters and domain-specific training to deliver accurate and reliable results. Businesses with complex use cases should invest in tailored AI solutions for critical, industry-specific tasks to ensure optimal performance and accuracy.
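For readers curious what temperature, Top-K, and Top-P actually do, here is a toy, vendor-agnostic sketch of the sampling step they control: temperature rescales the model's scores (lower = more predictable), top-k keeps only the k most likely tokens, and top-p keeps the smallest set of tokens whose probabilities add up to at least p. The token scores below are made up for illustration.

```python
import math
import random

def sample_next_token(logits, temperature=1.0, top_k=None, top_p=None,
                      rng=None):
    """Pick one token from made-up model scores using the three knobs."""
    rng = rng or random.Random(0)
    # Temperature: <1 sharpens the distribution, >1 flattens it.
    scaled = {tok: score / temperature for tok, score in logits.items()}
    # Softmax over the scaled scores (numerically stable form).
    m = max(scaled.values())
    exps = {tok: math.exp(s - m) for tok, s in scaled.items()}
    total = sum(exps.values())
    probs = sorted(((tok, e / total) for tok, e in exps.items()),
                   key=lambda kv: kv[1], reverse=True)
    # Top-K: drop everything but the k most probable tokens.
    if top_k is not None:
        probs = probs[:top_k]
    # Top-P (nucleus): keep the smallest prefix whose mass reaches p.
    if top_p is not None:
        kept, mass = [], 0.0
        for tok, p in probs:
            kept.append((tok, p))
            mass += p
            if mass >= top_p:
                break
        probs = kept
    # Renormalize what survived and draw one token.
    total = sum(p for _, p in probs)
    r = rng.random() * total
    for tok, p in probs:
        r -= p
        if r <= 0:
            return tok
    return probs[-1][0]

# Three candidate ways a model might render a deductible.
logits = {"$500": 3.0, "$1,500": 2.0, "five hundred": 0.5}
# A very low temperature makes the top choice dominate - exactly the
# kind of predictable, focused output a data-extraction workflow wants.
print(sample_next_token(logits, temperature=0.1))
```

For document extraction you generally want the dial turned toward predictable (low temperature, tight top-k/top-p), which is precisely the control consumer chat interfaces don't expose.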
Models are only as good as the data they’re trained on - The accuracy and reliability of AI models depend significantly on the quality of the training data. High-quality data leads to more reliable and robust models, while poor-quality data can result in models that make inaccurate predictions or exhibit biased behavior.
Takeaway - For a model to effectively solve the data issue in brokerage, it needs to be trained on thousands of documents. Daily. From every state and carrier, and in every possible format. And, if the carriers change their formats, you have to start over. This sounds daunting because, well, it is.
Building an in-house AI solution for benefits brokerage isn't just a matter of plugging in some pre-made models and calling it a day. As we've seen, the challenges run deep - from model degradation to the need for specialized training data.
Sure, the allure of a custom-built tool is strong. But let's be honest: most brokerages aren't sitting on huge teams of AI engineers or a diverse library of carrier documents to fine-tune models daily.
The data problem in brokerage is massive, and while AI holds promise, it's not a cure all. Firms diving into this need to be prepared for a long-haul commitment – constant monitoring, retraining, and adapting to an ever-shifting landscape of carrier formats and AI capabilities.
So before you jump on the "build-your-own-AI" bandwagon, take a hard look at your resources and long-term strategy. There's an inherent trade-off between hyper-customization and the efficiencies of standardization, but there's a clear upside to investing in solutions that have the scale of training data and engineering resources required to build - and, maybe even more critically, to maintain - the accuracy and efficiency of these models over time.
Thanks for reading - and stay tuned for the next issue of the PlanYear AI Newsletter!
Want to learn more about the PlanYear Benefits Platform? Contact us now to learn how you can quickly modernize the employee benefits experience with PlanYear.
Want to be notified when new editions of the PlanYear AI Newsletter are published? Subscribe now.
Posted by Nick Kostovny
Nick Kostovny is a dynamic and innovative business development and marketing professional in the employee benefits technology space. With a background spanning some of the highest-growth companies in the US, such as Carta and AllBirds, Nick brings a fresh and unique perspective to employee benefits. Outside of work, you'll find Nick playing the cello, kayaking, skiing, and cooking overly-ambitious recipes.
LinkedIn