PlanYear AI Newsletter - August 2024

Welcome to the PlanYear AI Newsletter for August 2024! Our goal is to help you understand artificial intelligence in the context of employee benefits. In each issue, we'll share articles, case studies, and insights about what's going on in AI for Employee Benefits (EB).

In this issue: AI Hallucinations, Synthetic Data, and Risks.

Artificial Intelligence (AI) continues to make significant strides, from breakthroughs in natural language processing to advancements in autonomous systems. In May, OpenAI demoed a new voice model that may revolutionize how we use technology. While these developments are exciting, they also bring to light an intriguing phenomenon: "hallucinations" within Large Language Models (LLMs).

As leaders look to implement AI into their workflows, understanding the implications of these hallucinations is becoming crucial. In this month's newsletter, we’ll explore what AI hallucinations are, discuss current mitigation strategies, and examine the role of synthetic data in addressing these challenges.

The Reality of AI Hallucinations

AI hallucinations occur when LLMs generate false or misleading information. This can happen due to insufficient training data, biases in the data, or the model's lack of grounding in real-world knowledge. The consequences can be significant, especially in industries where accuracy is paramount, such as healthcare and law.

For instance, a healthcare AI model might incorrectly identify a benign skin lesion as malignant, leading to unnecessary medical interventions. Similarly, a recent case involving a New York lawyer shows how easily AI output can be misused, even unintentionally: the lawyer relied on ChatGPT for legal research, the tool fabricated case citations, and submitting them to the court resulted in sanctions and fines.

It's important to note that AI models remain probabilistic, meaning there's always an element of chance in their outputs. This inherent randomness means that hallucinations can never be completely eliminated. These models predict the most likely word to follow a given sequence based on patterns learned from vast datasets, and that prediction process can sometimes produce unexpected or inaccurate results, given the complexity and variability of human language.
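To make this concrete, here is a minimal sketch in Python. The toy vocabulary and probabilities are invented for illustration (a real model scores tens of thousands of tokens), but the mechanism is the same: the next word is sampled from a probability distribution, so an unlikely, and wrong, continuation occasionally wins.

```python
import random

# Toy next-word distribution a model might assign after the prompt
# "The plan's annual deductible is..." (numbers invented for illustration).
next_word_probs = {
    "$500": 0.55,
    "$1,000": 0.30,
    "waived": 0.10,
    "$5,000,000": 0.05,  # unlikely, but still possible: a "hallucination"
}

def sample_next_word(probs: dict[str, float]) -> str:
    """Sample one word in proportion to its assigned probability."""
    words = list(probs)
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

# Running the same "prompt" several times shows the randomness: most
# draws are plausible, but the low-probability wrong answer can appear.
for _ in range(10):
    print(sample_next_word(next_word_probs))
```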

Mitigating AI Hallucinations

As models evolve, so too must the techniques used to increase their accuracy. Here are some strategies currently being employed to mitigate AI hallucinations:

  1. Improve Data Quality and Diversity: Ensuring AI models are trained on diverse, balanced, and well-structured datasets can help minimize biases and reduce the likelihood of hallucinations.
  2. Implement Human Oversight: Humans remain a vital piece of the AI puzzle. Involving human reviewers to validate AI outputs is crucial for identifying and correcting hallucinated content.
  3. Cross-reference with Reliable Sources: AI should be treated as a tool, not an infallible oracle. Encouraging users to cross-check AI-generated content with reliable sources and comparing outputs from multiple AI platforms can provide a better understanding of the quality and reliability of the results.
  4. Advanced Prompting Techniques: Methods like chain-of-thought prompting have shown promise in increasing the accuracy of chatbot outputs by breaking responses down step by step (see the sketch after this list).
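To illustrate that last strategy, here is a minimal sketch of chain-of-thought prompting using the OpenAI Python client (openai>=1.0). The model name, the benefits question, and the prompt wording are our own illustrative assumptions, not a prescribed recipe.

```python
# A minimal sketch of chain-of-thought prompting; assumes the OpenAI
# Python client (openai>=1.0) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

question = (
    "An employee's plan has a $1,500 deductible and 20% coinsurance. "
    "After a $4,000 in-network claim, what does the employee owe?"
)

# Direct prompt: the model answers in one shot.
direct = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[{"role": "user", "content": question}],
)

# Chain-of-thought prompt: asking for intermediate steps encourages the
# model to work through the arithmetic before committing to an answer.
cot = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": question
        + " Think step by step: first apply the deductible, then apply "
        "coinsurance to the remainder, and only then state the total.",
    }],
)

print("Direct:", direct.choices[0].message.content)
print("Step-by-step:", cot.choices[0].message.content)
```

The correct answer here is $2,000 (the $1,500 deductible plus 20% of the remaining $2,500). Asking the model to show its intermediate steps makes an arithmetic slip much easier for a human reviewer to catch, which is also why this technique pairs well with the human-oversight strategy above.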

The Role of Synthetic Data

During our research on the challenges of AI hallucinations, the term “synthetic data” kept popping up. Synthetic data is likely to keep growing in importance as we become more reliant on models for our day-to-day work. But what exactly is it, and why does it matter?

Synthetic data is artificially generated information that mimics the characteristics and patterns of real-world data. Created using algorithms, generative models, or simulations, it can be used as a substitute for real data in various applications, including the development of LLMs.

Synthetic data can be used to augment real-world datasets, providing a broader range of scenarios and examples. This helps cover potential edge cases that might not be present in the original data, thereby reducing the chances of AI hallucinations.

To put it in simpler terms, synthetic data is like a computer-generated simulation of real information. It's akin to a video game creating a realistic virtual world, but for data instead of a game environment. This approach allows developers to train AI systems on vast amounts of data without relying on potentially sensitive or private real-world information.
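For a concrete, if deliberately simplified, picture of how that generation can work, here is a minimal sketch in Python. The two-column benefits dataset is invented for illustration, and the generator just matches each column's mean and spread; production systems use far richer generative models that also preserve relationships between columns.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Stand-in "real" data: monthly premiums and ages for 1,000 enrollees
# (both columns invented purely for illustration).
real_premiums = rng.normal(loc=450, scale=80, size=1000)
real_ages = rng.normal(loc=42, scale=11, size=1000)

def synthesize(column: np.ndarray, n: int) -> np.ndarray:
    """Draw n synthetic values that match the column's mean and spread.

    A real generator (a GAN, a language model, or a simulation) would
    also capture correlations between columns; this sketch mimics each
    column's distribution independently.
    """
    return rng.normal(loc=column.mean(), scale=column.std(), size=n)

# Five times more synthetic records than real ones, with no actual
# enrollee's data in them.
synthetic_premiums = synthesize(real_premiums, 5000)
synthetic_ages = synthesize(real_ages, 5000)

print(f"real mean premium:      {real_premiums.mean():.2f}")
print(f"synthetic mean premium: {synthetic_premiums.mean():.2f}")
```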

However, like any powerful tool, synthetic data comes with its own set of challenges. One significant concern is the potential for bias inheritance or amplification. If the original datasets used to generate synthetic data contain biases, these can be reflected or even exaggerated in the synthetic output. This ripple effect can lead to skewed model outputs, potentially compromising the AI's ability to generalize effectively to new, unseen data.

Another risk lies in over-reliance on synthetic data. While it can be an invaluable tool for augmenting datasets, exclusively training models on synthetic data without sufficient real-world input can result in a decline in model quality and diversity. This degradation stems from the models learning from data that may not fully capture the intricate complexity and variability of real-world scenarios. Striking the right balance between synthetic and real data is crucial to harness the benefits of synthetic data while mitigating these risks.

If you're a business leader looking to put AI in front of your internal teams, here are some questions to ask about a model's integrity:

  • How does the AI model ensure fairness and avoid bias?
  • What are the security measures in place to protect data and AI systems?
  • How transparent is the AI model in its decision-making process?
  • What is the quality and source of the data used to train the AI model?

Conclusion

AI hallucinations represent a significant challenge in the development of reliable and trustworthy AI systems. While advancements in training techniques and the use of synthetic data offer promising solutions, the inherent probabilistic nature of AI models means that hallucinations will always pose some level of risk. It's crucial for businesses to stay aware of these risks and to monitor how each of them evolves. That ongoing vigilance will enable companies to accurately assess the risk-to-reward ratio when implementing these technologies.

Here are a few other interesting trends in the AI space: 

AI-based clinical research firm Paige, working with Microsoft, has introduced two new AI models, Virchow2 and Virchow2G, that are set to make a big difference in how we diagnose and treat cancer. Paige built the models on a huge collection of data: over three million pathology slides from more than 800 labs in 45 countries, drawn from over 225,000 patients worldwide, making the training set exceptionally diverse.

A panel of filmmakers discussed how AI might change the movie industry. Right now, AI tools aren’t advanced enough to drastically change how films are made, but in the future, they could have a big impact. These tools might make it easier for more people to create movies, but this could also result in a lot of low-quality content.

Google has introduced major updates to its AI assistant, Gemini, making it more powerful and integrated with Android. The new "Gemini Live" feature enables natural, hands-free conversations with AI, allowing users to engage with multiple voice options and deep app integrations like Keep, Tasks, and YouTube Music.

Thanks for reading - and stay tuned for the next issue of the PlanYear AI Newsletter! 

Want to learn more about the PlanYear AI-Powered Benefits Platform? Contact us now to learn how you can quickly modernize the employee benefits experience with PlanYear.

Want to be notified when new editions of the PlanYear AI Newsletter are published? Subscribe now:

Posted by Nick Kostovny

Nick Kostovny is a dynamic and innovative business development & marketing professional in the employee benefits technology space. With a background spanning some of the highest-growth companies in the US, such as Carta and Allbirds, Nick brings a fresh and unique perspective to employee benefits. Outside of work, you'll find Nick playing the cello, kayaking, skiing, and cooking overly ambitious recipes.

LinkedIn