Colleagues and friends are the people most likely to know your secrets, including your employer’s confidential information.
However, with the proliferation of artificial intelligence (AI) in the workplace, employees are increasingly sharing personal information with their new friend – the neighborhood chatbot.
As a result, organizations now face an unprecedented threat to the security of their data – one they are unprepared for.
When companies do recognize this, their responses tend toward the extremes: either banning ChatGPT and the growing number of AI-powered writing tools outright, or not regulating employee use of AI at all. There is hardly a middle ground.
Beyond breaching confidentiality rules, sharing information with AI tools can also create opportunities for cybercriminals.
AI conquers the world of work
CybSafe, a behavioral science and data analytics company, surveyed 1,000 office workers in the UK and US about their use of AI in the workplace.
The study revealed:
- 50% of respondents are already using AI tools at work – a third weekly and 12% daily;
- 44% use the tools for research, 40% for writing reports, 38% for data analysis, and 15% for writing code;
- 64% of US office workers have entered work information into a generative AI tool, and another 28% are unsure if they have;
- 38% of US users admit they share information they wouldn’t readily tell a friend at a bar;
- In the UK and US, 69% and 74% of respondents respectively think the benefits of using AI outweigh the risks;
- A significant percentage of participants would continue to use AI tools even if their employer banned them;
- 21% cannot differentiate between human-made content and AI-generated content.
Employees are not the only ones who find AI tools helpful and productive – so do cybercriminals.
Dr. Jason Nurse, Director of Science and Research at CybSafe and Associate Professor at the University of Kent, said: “Generative AI has tremendously lowered the barriers to entry for cybercriminals looking to exploit companies.”
“Not only does it help create more compelling phishing messages, but as workers become more familiar with AI-generated content, the line between what is perceived as real and what is perceived as fake will significantly narrow.”
Companies with a “No AI” list
- The aerospace and defense company Northrop Grumman has banned AI tools until they are fully tested;
- After employees uploaded confidential code, Samsung learned a costly lesson and banned AI tools;
- Verizon has blocked access to AI tools within its systems;
- JPMorgan Chase has restricted the use of AI tools, though details have not been disclosed;
- Deutsche Bank has blocked all AI tools;
- Accenture bans its employees from using AI tools in the office;
- Amazon encourages its employees to use its proprietary bot CodeWhisperer, but has apparently not yet blocked access to other AI tools.
Conclusion
AI is making its way into the world of work, but it also opens new avenues for cybercrime. Companies and their employees must therefore remain vigilant and keep their risk management up to date.
At the same time, it is extremely tempting for companies and teams to make AI tools available to their employees.
Balancing these competing pressures is a difficult but necessary endeavor.