THE THREAT OF AI: Is ChatGPT Stealing Your Data?

The first in a four-part series, this post examines some of the pitfalls inherent in today's AI systems and looks at ways to mitigate the threats. Watch the podcast.

By Logan D. Selby, PhD, DataShapes, and Junaid Islam, XQ Message


No other technology has had more press coverage in 2023 than ChatGPT. While AI systems aren't new, the ability of Large Language Models (LLMs) like ChatGPT to ingest and organize vast amounts of data across multiple industries is game-changing.

Not surprisingly, many public and private organizations want to integrate hosted AI services like ChatGPT into their enterprises, and from a cost perspective, doing so seems to have no downside.

Unfortunately, these systems work by absorbing massive amounts of data from users' prompts into a centralized database, which is then used to train neural-network models with billions of parameters. Rarely do users ask whether the data they're feeding these machines is protected.

And where does all that data go once the model is trained?

These are questions you should ask before feeding your information into online prompts.

Predictably, lawsuits have been filed in federal court against OpenAI (the creator of ChatGPT), accusing the company of scraping 300 billion data points without authorization.

Fortunately, centralized hosted services like ChatGPT are not the only option for organizations that need to analyze large data sets. A better choice for enterprises with high-value data is decentralized AI with Zero Trust Data security.

About Decentralized AI and Zero Trust Data

Decentralized AI means that data processing occurs at the point of ingestion, and that only the insights derived from the data are transported back to the end user, database, machine, dashboard, or cloud application. Edge computing is an essential component of decentralized AI because it eliminates the need for large-scale centralized data storage. Small, lightweight AI software that can operate on these edge devices, away from the centralized "mothership" used by hosted services, is critical. Decentralizing AI in this way inherently mitigates the data-exposure pitfalls described above, as sketched below.
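To make that concrete, here is a minimal Python sketch of edge-side processing. The readings, the fever threshold, and the `process_at_edge` function are hypothetical stand-ins for a real edge AI model; the point is that raw data never leaves the device, and only the derived insight travels upstream.

```python
import json

# Hypothetical threshold standing in for a trained edge model.
FEVER_F = 100.4

def process_at_edge(raw_readings: list[float]) -> str:
    """Analyze data on the device where it is ingested.

    Raw readings stay in local memory; only the small derived
    insight is serialized for transport upstream.
    """
    anomalies = [r for r in raw_readings if r >= FEVER_F]
    insight = {"samples": len(raw_readings), "fever_events": len(anomalies)}
    return json.dumps(insight)  # a few dozen bytes travel, never the raw data

# The dashboard or cloud application receives only the summary:
print(process_at_edge([98.6, 98.7, 98.5, 104.2, 98.6]))
# {"samples": 5, "fever_events": 1}
```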

Zero Trust Data is a new security architecture, defined by the U.S. Department of Defense in its November 2022 Zero Trust Strategy, in which every data object is protected with a unique encryption key. This "per-object" encryption enables forensic-level access control. When integrated into decentralized AI systems, Zero Trust Data ensures that raw data on edge compute systems is protected from exfiltration attacks, and that only authorized software programs or users can access the insights.
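As a rough illustration of per-object encryption, here is a generic Python sketch using the open-source `cryptography` package (not XQ Message's actual API): every object is sealed under its own key, so compromising one key exposes exactly one object.

```python
from cryptography.fernet import Fernet

def encrypt_per_object(objects: list[bytes]) -> list[tuple[bytes, bytes]]:
    """Seal each data object under its own unique key.

    Returns (key, ciphertext) pairs. In a real deployment the keys
    would live in a separate key-management service, which is what
    enables per-object access control and forensic auditing.
    """
    protected = []
    for obj in objects:
        key = Fernet.generate_key()  # unique key for this object only
        protected.append((key, Fernet(key).encrypt(obj)))
    return protected

# Losing one key exposes one record, not the whole data set.
records = [b"patient-1 vitals", b"patient-2 vitals"]
for key, ciphertext in encrypt_per_object(records):
    assert Fernet(key).decrypt(ciphertext) in records
```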

DataShapes and XQ Message have partnered to create the first Zero Trust AI solution. DataShapes' decentralized AI processes data as it is ingested; XQ Message encrypts any stored data or generated insights.
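Conceptually, the two capabilities compose into a single pipeline: analyze where the data is ingested, then encrypt the resulting insight before it travels. The sketch below simply chains the two hypothetical helpers from the previous sections; it is an illustration of the pattern, not the partners' actual implementation.

```python
from cryptography.fernet import Fernet

def zero_trust_insight(raw_readings: list[float]) -> tuple[bytes, bytes]:
    """Process data at the edge, then encrypt the insight per-object."""
    insight = process_at_edge(raw_readings)  # raw data never leaves the device
    key = Fernet.generate_key()              # one key for this insight object
    return key, Fernet(key).encrypt(insight.encode())
```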

The Zero Trust AI solution is ideal for regulated industries, such as healthcare and financial services, where protecting customer data is required. For example, patients can now be monitored safely in their homes because cyber attackers can neither access the medical sensors nor hijack their control systems.

Learn More



__________________________________________________

Don’t Miss It!

The second installment in this four-part series examines mission-critical data for the DoD. For updates, subscribe to the Indiana Nomad YouTube Channel or connect with the Indiana Nomad on LinkedIn.
