In the first of this two-part series on the cyber security concerns being raised around ChatGPT and TikTok, we explained why multiple international governments have banned the latter, and why cyber security experts are concerned the former is enabling scammers to professionalise their phishing campaigns.

In this follow-on blog, we hear new warnings about the potential runaway train that is AI, and the risks these tools could pose to Irish businesses.

Not so long ago, serial innovator Elon Musk, along with Sam Altman, was one of the key founders and investors in OpenAI, the research start-up behind a generative AI bot that would not go amiss in the robot-scape of Blade Runner. That bot, launched in November 2022 as ChatGPT, went on to become one of the tech phenomena of the year.

Six months and several million users later, Musk, who stepped away from the start-up in 2018, has joined a large cohort of AI (Artificial Intelligence) experts calling for a halt to the domino-effect development of AI systems. Why? Because of the risks that ever-advancing systems like ChatGPT pose to society.

Is Elon Musk’s call for a pause to AI development a bit extreme?

Maybe, but not according to the more than 1,000 experts and investors in the field who have joined him in signing a letter calling for a temporary pause on AI development, to allow tech researchers to carry out a deep-dive study identifying and assessing both the potential capabilities and the risks of systems more powerful than GPT-4.

Published by the not-for-profit Future of Life Institute, the letter issues a stark warning of the threat to civilisation posed by AI systems programmed for continuous (self-)improvement. In other words, humans are developing AI systems that will in turn compete with humans for jobs, positions of importance and, eventually, fundamental human functions. The risk? That our inventions will, in time, eradicate the need for humanity.

“AI systems with human-competitive intelligence can pose profound risks to society and humanity. They should be developed only (when) we are confident that their effects will be positive and the risks manageable.”

Far-fetched? Apparently not, according to serial entrepreneur Musk. In a recent post on his newly acquired Twitter platform, the Tesla founder and world’s richest man declared that the direction the AI venture had taken was “not what (he) intended”. Distancing himself from the company he helped to found, Musk tweeted that it no longer resembled its initial self.

“OpenAI was created as an open source, non-profit company. It has now become a closed-source, maximum-profit company effectively controlled by Microsoft. Not what I intended at all. What we need is TruthGPT.”

Elon Musk, 15 Feb ’23

The letter, which is also signed by Apple co-founder Steve Wozniak, goes on to call for a “set of shared safety protocols for advanced” AI design and development, stating that the whole sector should be “rigorously audited … by independent outside experts.” It also calls for AI systems to come under the control of regulatory authorities.

Cyber Security

At the same time, joining the chorus of cyber security experts calling out ChatGPT as a vehicle for scammers, Europol has voiced its concern that AI systems of this kind could provide criminal gangs with endless opportunities for cyber crime.

While the UK government has published a policy paper detailing an AI regulatory framework, the EU’s Approach to Excellence in AI proposes three tiers of interwoven legal initiatives to shore up trust-based AI development. This three-pronged approach includes “a European legal framework for AI to address fundamental rights and safety risks” specific to AI systems.

According to the Commission, its “legal framework for AI proposes a clear, easy to understand approach, based on four different levels of risk: unacceptable risk, high risk, limited risk, and minimal risk.”

Unhappy with the EU’s apparently less-than-cautious position on AI, Italy has broken ranks, issuing an outright ban on ChatGPT in March 2023. Elsewhere, investment banking giant Goldman Sachs has published a report indicating that as many as 300 million jobs could be negatively impacted, up to and including outright loss, by AI systems.

“If generative AI delivers on its promised capabilities, the labor market could face significant disruption. Using data on occupational tasks in both the US and Europe, we find that roughly two-thirds of current jobs are exposed to some degree of AI automation, and that generative AI could substitute up to one-fourth of current work. Extrapolating our estimates globally suggests that generative AI could expose the equivalent of 300mn full-time jobs to automation.”

Goldman Sachs

Just how much of a threat to data integrity is ChatGPT?

A command-driven language bot, ChatGPT is exactly that: a system for creating and editing written content, most typically in the form of:

• Emails
• SOPs
• Code
• CVs
• Articles
• Reviews
• Meta Descriptions
• Translations

Having proved itself a cheap yet invaluable resource for those SMEs whose budgets do not stretch to big marketing-comms teams, ChatGPT nonetheless raises several concerns from a cyber security perspective.

Aside from handing an advantage to plagiarists and ‘hacktivists’, the big issue with ChatGPT revolves around privacy. The AI bot collects and processes the data from every command entered into the system, linking it back to the user’s account before storing it for an unspecified period of time.

Can ChatGPT help your organisation to create professional business content? Most certainly. Its outputs, whilst requiring some human tweaking and refining, are at a level that renders them indistinguishable from content written by humans. And the more company data that’s pushed into ChatGPT, the better and more bespoke the outcomes.

Make the AI system work for you by controlling what data is input into it: use specific terminology and real-business examples of communications to guide the bot into providing you with tailored, organisation-specific content.
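Controlling what goes into the bot can be partly automated. As a minimal sketch (the patterns and placeholder labels below are illustrative assumptions, not a full data-loss-prevention solution), a prompt can be screened for obviously sensitive details before it leaves the organisation:

```python
import re

# Illustrative patterns for common sensitive details: email addresses
# and Irish-format phone numbers. Extend to suit your own data policy.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b(?:\+353|0)\s?\d{1,2}[\s-]?\d{3}[\s-]?\d{4}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace each sensitive match with a bracketed placeholder
    before the prompt is submitted to an external AI tool."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt
```

A pre-submission step like this lets staff use real business examples for context while keeping customer identifiers out of the bot’s stored history.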

Can ChatGPT be managed in a secure manner?

If managed correctly, ChatGPT can be used in as safe a way as is possible with all things digital. How?


ISO 27001 Information Security Management

The ISO 27001 Information Security Management System (ISMS) standard can provide businesses with the tools necessary to ensure AI systems are used effectively and within the most secure parameters possible.

Chatbots pose two key risks to users: exposure to external threats and the exposure of internal weaknesses. Add to that the rising incidence of ‘fake bots’, and your business faces a double risk of hacking through malware and/or the exposure of unencrypted data.

By establishing a robust process framework such as that offered by ISO 27001, Irish businesses can monitor and manage the risk of cyber attacks.

To ensure your use of ChatGPT and other AI systems is as secure as possible, it is essential that the organisation adopts a set of safety protocols and procedures to guarantee information security. Settings such as data storage and retention should be carefully managed and, where possible, retention periods limited to the minimum allowed.
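Retention limits are easiest to enforce when they are written into the housekeeping process itself. The sketch below is a hypothetical example, assuming a 30-day limit and a simple record layout; the real period should come from your organisation’s ISO 27001 data-retention policy:

```python
from datetime import datetime, timedelta, timezone

# Illustrative retention window; set this from your documented
# data-retention policy, not a hard-coded guess.
RETENTION = timedelta(days=30)

def purge_expired(records, now=None):
    """Return only the stored prompt/response records that are still
    inside the retention window; everything older is dropped."""
    now = now or datetime.now(timezone.utc)
    return [r for r in records if now - r["stored_at"] <= RETENTION]
```

Run as a scheduled job, a purge like this gives auditors a demonstrable control rather than a policy statement alone.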

Controlled, authorised access should be rolled out to manage which users can utilise the bot and what information they can upload into it. Assign roles and create log-on restrictions and access controls to manage who can log into the system and which tasks they can carry out.

Where possible, adopt a single sign-on (SSO) authentication method across the organisation. This simplifies controlled sign-ins by employing a single set of user credentials.

Promote user awareness by providing clear, straightforward security awareness training, informing employees of the potential risks as well as the known advantages of using AI systems.

By adopting process-driven management systems like ISO 27001, organisations can ensure their personnel follow a risk-based plan-do-check-act (PDCA) approach that enables them to make optimal use of AI systems in the most security-conscious of manners.

An ISO 27001 ISMS provides organisations with a framework to ensure conformity and compliance. It will empower your business to better understand the potential threats and risks, as well as the benefits to be gained from using AI tools, enabling your employees to effectively manage and monitor all potential impacts, both positive and negative.

By implementing robust data security measures such as ISO 27001, your organisation will achieve compliance whilst simultaneously managing system access and mitigating data breaches.

Conclusion

For now, and probably for the foreseeable future, AI systems are here to stay. An incredible invention, they also pose a significant risk to the security of your data.

By adopting industry-leading best practice methodologies such as ISO 27001, organisations increase their chances of effective AI systems management, allowing them to leverage the best that AI has to offer whilst minimising their exposure to potential threats and loss.

To find out more about ISO 27001, its requirements, benefits and proven outcomes, view our standards page or give our team a call on 01 620 4121 for a free consultation.

*Sources: Sky News, The Irish Times, BBC News, Twitter

Book a FREE Consultation!

In the consultation you will: