Assessing the risks of generative AI in the workplace
Amid the exponential growth of generative AI, there is a pressing need to evaluate the legal, ethical, and security implications of these solutions in the workplace. A concern frequently raised by industry experts is the lack of transparency around the data on which many generative AI models are trained. Little is publicly known about the specifics of the training data used for models like GPT-4, which powers applications such as ChatGPT. This lack of clarity extends to how information obtained during interactions with individual users is stored, raising legal and compliance risks.

The potential for sensitive company data or code to leak through interactions with generative AI solutions is a significant concern. “Individual employees might leak sensitive company data or code when interacting with popular generative AI solutions,” says Vaidotas Šedys, Head of Risk Management at Oxylabs. “While there is no concrete evidence that data submitted to C...