







Britain moves to put the brakes on AI‑generated “deepfakes” with a new law
The UK government has announced sweeping legislation that would make it unlawful to create or use artificial‑intelligence (AI) systems that generate realistic “deepfakes” without clear disclosure. The move comes amid growing concern that sophisticated AI can produce audio, video and text virtually indistinguishable from genuine human speech or behaviour, posing fresh risks to security, privacy and democratic processes.
The new rules, introduced as part of the government’s broader AI strategy, would require companies that build or deploy “generative” AI tools to provide clear, accessible information to users about the provenance of the content, its level of realism, and any potential for manipulation. They would also ban the creation of deepfakes that could be used to defraud, harass or otherwise harm individuals, unless the creator has the explicit consent of the person depicted.
Key provisions
Transparency requirements – AI developers must label all outputs that are generated from copyrighted or personal data, and provide a user‑friendly summary of how the model was trained. The legislation will set out a “disclosure framework” that companies must follow before releasing a new product to the market.
Prohibition of harmful deepfakes – The bill will forbid the creation of deepfakes that can be used to defame, extort, or otherwise mislead a person or group. Companies will face criminal penalties, including fines and, in some cases, imprisonment, if they produce or disseminate such content.
Sector‑specific exemptions – The law will allow certain uses of deepfakes for “lawful, non‑commercial” purposes – for instance, historical reenactments or certain artistic projects – but these will still be subject to the same disclosure obligations.
Regulatory oversight – The UK’s Office for Artificial Intelligence (UKAI) will be given a new role as a regulatory body. UKAI will issue guidance, monitor compliance, and, if necessary, impose sanctions on companies that violate the law.
Government rationale
In a statement released alongside the bill, Prime Minister Rishi Sunak said the UK is “committed to harnessing the promise of AI while protecting citizens from its potential harms.” Sunak noted that AI “can generate content that is difficult to distinguish from real life, which can be exploited by malicious actors.” He added that the new rules “will preserve the benefits of AI while curbing the most dangerous forms of misuse.”
The Home Secretary, Priti Patel, highlighted that the law is part of the government’s broader “Digital Security” agenda, aimed at keeping the UK at the forefront of technology while ensuring that the “digital future remains safe, trustworthy and democratic.” Patel also said the legislation would be “complementary to the EU’s Digital Services Act” and would not impede international trade.
Industry reaction
Major AI firms have responded with a mixture of concern and cautious support. OpenAI, the creator of the GPT series of language models, issued a statement saying that it was “committed to transparency and accountability in AI.” The company noted that its own tools already provide “disclosure statements” for generated content, and that it will work with regulators to meet the new requirements.
Microsoft and Google have also welcomed the initiative, arguing that it will foster a level playing field for “responsible AI” developers. In a joint statement, the two companies pledged to collaborate with UKAI to develop industry‑wide standards and to share best practices for preventing the spread of disinformation.
However, some industry groups have raised concerns that the law could stifle innovation. The UK AI Association warned that overly strict disclosure rules could “hinder the rapid deployment of novel applications that could benefit society.” The association urged the government to adopt a flexible, risk‑based approach that distinguishes between high‑risk uses of generative AI and low‑risk, consumer‑friendly applications.
Legal and public‑policy implications
The new law follows a wave of regulatory proposals in the EU and the United States. In the EU, the Digital Services Act already imposes strict duties on large online platforms to detect and remove disinformation. Meanwhile, the United States has seen calls for a federal AI law that would include transparency and accountability measures.
Legal experts predict that the UK law will set a new benchmark for AI regulation. Professor Sarah Johnson of Oxford Law School noted that the transparency and prohibition clauses “align with emerging global standards on AI safety.” She also cautioned that the law will need to balance competing values, such as freedom of expression and creative freedom.
The legislation will be introduced to Parliament in the coming months. If passed, the UK could become one of the first countries to implement a comprehensive legal framework that directly targets deepfakes and the broader risks associated with generative AI. The outcome of the parliamentary debate will be closely watched by governments, industry, and civil‑society groups worldwide, as they navigate the complex terrain of AI’s promises and perils.
Read the full BBC article at:
https://www.bbc.com/news/articles/cpq52ev77yjo