86% of Professionals in Pakistan Use AI Tools, Few Trained to Use Them Safely: Kaspersky Reports


According to recent Kaspersky research titled “Cybersecurity in the workplace: Employee knowledge and behavior”, 86% of professionals surveyed in Pakistan say they use Artificial Intelligence (AI) tools for work tasks. However, only 52% have received training on the cybersecurity aspects of using neural networks, which is one of the critical elements of protection against AI-related risks ranging from data leaks to prompt injections.

The vast majority of survey respondents in Pakistan (98%) said they understand what the term “generative artificial intelligence” means, and for many employees this knowledge is no longer just theoretical: AI tools have become part of their every workday. Overall, 86% of respondents use AI tools for work: 68% use AI to write or edit texts, 52% for work e-mails, 56.5% to create images or videos with the help of neural networks, and 35% for data analytics.

The survey uncovered a serious gap in employee preparedness for AI risks. 21% of professionals reported receiving no AI-related training at all. Among those who did receive training, 66% said the focus was on how to use AI tools effectively and create prompts, while 52% received guidance on the cybersecurity aspects of AI use.

While AI tools, which help automate everyday tasks, are becoming ubiquitous in many organizations, they often remain part of ‘shadow IT’, with employees using them without corporate guidance. 81% of respondents said generative artificial intelligence tools are permitted at their workplace, 15% acknowledged these tools are not allowed, while 4% were unsure.

To make employee use of AI clearer and more secure, organizations should implement a well-documented, company-wide policy on the matter. Such a policy can prohibit AI use in specific functions and for certain types of data, regulate which AI tools are provided to employees, and allow only tools from an approved list.

“When implementing AI across a company, both complete bans and unrestricted use are typically ineffective. A more effective strategy is to adopt a balanced policy that grants varying levels of AI access based on the sensitivity of data handled by each department. When supported by proper training, this approach promotes both flexibility and efficiency, while maintaining strong security standards,” comments Rashed Al Momani, General Manager for the Middle East at Kaspersky.

To secure corporate AI use, Kaspersky recommends that organizations:

Train employees on responsible AI usage. Courses on AI security from the Kaspersky Automated Security Awareness Platform can help add specialized training to companies’ educational programmes.

Provide IT specialists with relevant knowledge of exploitation techniques and practical defense strategies. The ‘Large Language Models Security’ training, part of the Kaspersky Cybersecurity Training portfolio, can enhance both professional development and the overall cybersecurity of an organization.

Ensure all employees have a cybersecurity solution installed on the work and personal devices they use to access business data. Kaspersky Next products protect against a range of threats, including phishing and the installation of fake AI tools.

Create a full-fledged policy that addresses the spectrum of relevant risks. Kaspersky’s guidelines for securely implementing AI systems can be of help.
