Luke Hoffman

Why is Everyone Concerned about GenAI?

An Introduction to the Concerns Regarding Large Language Models


 

Artificial Intelligence and Machine Learning (AI/ML) have taken the world by storm. From general applications like linear regression to the more advanced use cases of Large Language Models (LLMs), AI has inserted itself as a mainstay in our culture. With such big change rightfully comes concern about how it will affect our lives and society as a whole. Over the next few weeks, I aim to use this platform to discuss some of these concerns as they relate specifically to LLMs/Generative AI and how Cellaware has positioned itself to address them. In this post, however, I would like to go over what the concerns are and why they matter.

 

If you search the web, you will find many supposed issues or concerns with Generative AI. From articles with titles like "Generative AI Ethics: Concerns and Solutions" or "9 Problems with Generative AI," you would think that every use case of Generative AI is nefarious and meant to cause the world harm. And while I think there are areas that warrant genuine concern, in reality these tools can ultimately be used for the betterment of business and society if used in the right context. Specifically, the context of how businesses can use this technology to help their workforce be more productive and even, in some cases, smarter.

 

Through my experience working in the AI/ML space, specifically in a business context, and after reading many books and articles on the subject, I have been able to reduce most of the concerns about Generative AI for business into four major categories:

 

  1. Copyright and data ownership.

  2. Regulatory compliance and appropriate use.

  3. Data and model training transparency.

  4. Hallucinations, bad behavior, and inaccuracies.

 

Copyright and Data Ownership

 

This concern arises from LLMs' insatiable need for data. It is no secret that many of the major LLM companies have received flak for the various sources from which their training material is derived. In a business context, a major concern is the use of proprietary data in the context of Large Language Models. What would happen if a Large Language Model got ahold of your company's secret sauce and made it available to a broader audience, even your competitors?

 

Hallucinations, Bad Behavior, and Inaccuracies

 

It has been widely publicized that Large Language Models can make stuff up. From something as seemingly innocuous as using the wrong name even after it was specified in the prompt, to something as life-altering as citing a made-up Washington Post article accusing a real professor of sexual misconduct, hallucinations can be a major issue when it comes to Large Language Models.

 

Regulatory Compliance and Appropriate Use

 

Given the speed with which Large Language Models and Generative AI have taken the scene, most, if not all, regulators have been playing catch-up. Their concerns range from ensuring their constituents are not harmed and their privacy isn't affected, to ensuring that the methods LLM/GAI companies use to train their models do not marginalize specific groups of people and are sustainable in their use of natural resources and electricity.

 

Data and Model Training Transparency

 

Many of the larger model creators in the LLM/GAI space have made the decision to close-source their models and training methods. While this has different implications for different purposes, there are valid concerns that the lack of transparency not only raises apprehensions about potential data theft or misuse but also complicates evaluating a generative AI model's output quality and accuracy, as well as its underlying references. Although companies like OpenAI are working to improve transparency around their training processes, significant ambiguity remains regarding the types of data used and how they contribute to training generative AI models.

 

 

Over the next few weeks, I aim to address each of these major categories, explain how LLM vendors are tackling them, and show how Cellaware Technologies has built its products to ensure that these risks are mitigated, and in many cases eliminated. Stay tuned.

 
