TikTok's shift to AI moderation puts British trust and safety jobs at risk
In line with a broader tech industry trend, TikTok is restructuring its operations to lean more on automated systems and less on human moderation teams. The shift is most visible in the UK, where hundreds of trust and safety jobs are being cut.
The video-sharing app says its AI already removes more than 85% of harmful or policy-violating content automatically. The larger question is whether automation alone can meet the rising bar of safety, cultural sensitivity, and accountability that regulators and users now demand.
Workers warn that users, especially minors, may face greater risks if AI becomes the first and last line of defense. The Communication Workers Union (CWU) has criticized the job cuts, arguing that human moderators catch subtleties that algorithms miss.
The restructuring consolidates moderation into fewer regional hubs and expands reliance on artificial intelligence; TikTok has not disclosed the hubs' locations in detail. The company insists that affected UK employees can apply for other roles internally and will be given priority if qualified.
The UK's Online Safety Act, enforced from July 2025, requires platforms to implement robust age checks and actively remove harmful material; breaches can draw fines of up to 10% of global turnover. Separately, the UK's Information Commissioner's Office has launched a major investigation into TikTok's data practices.
The geopolitical scrutiny TikTok faces over its Chinese parent company, ByteDance, amplifies Western anxieties about data governance and content manipulation. The global backlash suggests that trust and safety cannot be treated as a cost center without consequence: any misstep in moderation risks feeding a larger narrative about whether TikTok can safeguard democratic and social norms.
The UK trust and safety cuts follow similar layoffs in Berlin and elsewhere in Europe, and they arrive just as the Online Safety Act's strict compliance demands and heavy financial penalties take effect.
The restructuring is part of a coordinated global strategy to streamline moderation and rely more on AI. The gamble may deliver short-term operational gains, but the long-term test is whether TikTok can convince stakeholders that AI is capable of protecting its more than 30 million UK users. The stakes are high: TikTok's European revenues are growing, yet its reliance on automation risks eroding trust among regulators, workers, and users alike.