Coming under intense scrutiny.

Banned in multiple countries.

Like Icarus soaring towards the sun, have social media platforms flown too high, too quickly?

As successive world governments continue to ban Chinese-owned TikTok, and the AI chatbot ChatGPT causes deep divisions amongst teaching and cybersecurity communities alike, we ask: how safe are the socials, and do they have a place in the business world?

In the first part of this two-parter, we ask why successive governments around the world are banning ‘fun apps’ and take a look at how chatbots are enabling ‘hacktivists’.

Will TikTok Get Banned Across the Globe?

Banned from official devices by the governments of Canada, France, Great Britain and most recently the United States – whose Congress lawmakers shredded its CEO at a hearing on 23rd March 2023 – TikTok’s downward spiral has been as quick as, if not quicker than, its astonishingly rapid upward trajectory to the top of the social media leaderboard within a year of its launch in late 2016.

Building an almost instantaneous cult following, TikTok’s user base now stands at a staggering one billion active monthly users: compare that with its 650 million users just under two years ago! For reference, the Twitterati, who today number a paltry 300 million, pale in comparison. Video – the medium of choice of influencers, marketers, politicians and basically any individual or company worth its salt – has become the de facto means of selling a concept, vision, product or service to the 2020s consumer.

Consumers who sign up, log in and browse, mostly without giving a second thought to the potential cyber “evil that lies behind” the many portals through which they share so many private facets of their lives. The very same private data which the EU GDPR strives to protect continues, notwithstanding the ongoing rise in cybercrime, to be sprinkled liberally across the ‘metaverse’.

Just why have so many governments around the world banned their employees from installing TikTok on official devices?

According to the New Zealand government, “the Service has determined that the risks are not acceptable in the current parliamentary environment”. In the United Kingdom, meanwhile, Westminster has banned TikTok from “all parliamentary devices and the wider parliamentary network”. The reason given? “The need for robust cybersecurity in light of potential tracking and privacy risks from certain social media apps”.

As for France, well, ooh la la. The French have gone not one but several steps further than their lawmaking counterparts by banning not just TikTok but all social media apps from government devices.

A spokesperson for the French government said in a statement released to media at the end of March 2023:

“Recreational applications do not have sufficient levels of cyber security and data protection, and therefore constitute a risk to the protection of the data of French administrations and their public officials,” the statement read, whilst also calling into question the data security and storage practices of all social media companies.

Norway, the Netherlands and Canada have also joined the wave of global governments enforcing a total blackout of TikTok (and many other ‘recreational apps’) from official digital equipment.

TikTok's Aggressive Data Collection Strategy

While it might seem that the distrust of Chinese-based companies like TikTok has grown in recent months, as far back as the early 2020s both the British media and members of the UK government were growing increasingly wary of what was seen by many in the West as “aggressive data collection”. Are their suspicions unfounded? Apparently not.

The Chinese-owned social media app mines a huge amount of personal data whilst running on a device: location, calendar entries and other non-essential information such as details of the other apps installed. Once active, TikTok can even access the serial number of the SIM in the device! Why?

One must ask why a social media platform would require access to so much potentially harmful personal information. Data which is even more of a risk when it is business or government-related.

During the recent hearing with Congress, TikTok CEO Shou Zi Chew was anxious to focus on the company’s ongoing ramping up of its security practices. However, the US lawmakers were having none of it, preferring to bear down on the fact that the platform’s parent company, ByteDance Ltd., does in fact have access to US user data.

Ipso facto, lawmakers argue: if employees at the Chinese HQ have access to US data, then it stands to reason that this information could also be accessed by Chinese government officials.

What’s the issue with ChatGPT?

Well, apart from the fact that yet another “glitch” allowed users to see snatches of private bot chats to which they should not have been privy? Which must lead one to conclude that, as is the situation with TikTok, techies at ChatGPT HQ – OpenAI (co-founded by Elon Musk, who has since stepped away, and now backed by Microsoft) – have access to active private user content, allegedly contradicting the company’s policy stating that only anonymised data is accessible to its engineers.

Launched at the beginning of December 2022, ChatGPT gained one million users within its first week. As of the end of March 2023, the AI bot is tracking 13 million daily hits.

Seen by many as a ‘plagiarism tool’, enabling students to artificially create essays, theses and project work, ChatGPT has many weaknesses, not least of which is its security. (Another big issue is its apparent tendency towards ‘behavioural bias’, if one can use the term ‘behaviour’ to describe the actions of a piece of software.)

The AI bot, or Generative Pre-trained Transformer, has now come under the scrutiny of cybersecurity experts on foot of a reported rise in linguistically sophisticated phishing scams.

Typically riddled with errors and misused phrases, email scams tend to read like a bad fairy story or a communications notice drafted by a first-day intern – obvious through their poor grammar alone. Now, with the advance of the chatbot, scammers are upping their game, and very significantly so, according to cybersecurity watchdogs.

According to a group of data management experts, ChatGPT has been a boon for cyber criminals, who are using the tool to draft extremely professional-looking text. The bot, they said, is acting as an enabler for hackers seeking to extort money from businesses via ransomware. ChatGPT, they believe, has helped hackers to widen the scope of their ‘target base’ as they achieve more linguistically slick and sophisticated emails.

Should we be concerned? Well, Sam Altman, CEO of ChatGPT parent company OpenAI, seems to think so.

“We’re a little bit scared,” he told ABC News. “They (chat bots) could be used for large-scale disinformation and offensive cyber-attacks”. And that’s coming from the horse’s mouth.

According to one of the AI bot’s original investors, Elon Musk, ChatGPT (and its ilk) “is more dangerous than a nuclear weapon”. “I’ve been calling for AI safety regulation for over a decade,” he posted on Twitter a few months ago. “There is no regulatory oversight.” No regulatory oversight for a bot that falls outside the scope of the EU GDPR (it is currently GDPR non-compliant) and has already been hit by some worrying glitches.

In part two of this focus on the rise and fall and rise of social media apps and chatbots, we look at the risks these tools could pose to your business.

How CGBC and ISO Consultants can help

CG Business Consulting offers the internationally recognised, dedicated information security certification, ISO 27001.

This gives you the tools and processes you require to secure all financial and confidential data effectively, thus minimising the likelihood of it being accessed illegally or without permission.

Book a FREE Consultation!

In the consultation you will: