Blog Update: GDPR Red Flags in the Companies House Identity Verification System
In my ongoing efforts to comply with Companies House's new director identity verification requirement, I have encountered a development that raises serious concerns under the UK GDPR.
Following a Subject Access Request (SAR) to Companies House, I have now received written confirmation that they hold no records relating to my identity verification attempts via the GOV.UK One Login system. Specifically, they state they cannot see:
- Any verification outcome or status,
- Any login identifiers linked to me,
- Any audit trail or escalation records,
- Any way to associate a One Login attempt with a named director.
In short: the government department mandating legal compliance through a digital system is unable to access or confirm whether the process it enforces has even been attempted.
This raises three fundamental GDPR issues:
1. Lack of data traceability: if no record links a login identifier to a named director, Companies House can neither enforce compliance nor prove it. A sketch of the missing linkage record follows this list.
2. Opaque automated decision-making: GOV.UK One Login is effectively a black box. Users who fail verification are not told why, no explanation is given, and no human review appears to be available, potentially breaching Article 22 of the UK GDPR.
3. Joint controller confusion: Companies House refers all responsibility for identity processing to the Government Digital Service (GDS) and the Cabinet Office, yet it is Companies House that imposes consequences based on that process. This blurred boundary undermines accountability.
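To make the traceability gap concrete, here is a minimal sketch of the kind of linkage record whose absence the SAR response confirms. This is purely hypothetical: neither Companies House nor GDS has published any such schema, and every field name below is my own invention for illustration.

```python
# Hypothetical sketch only: no such schema has been published by
# Companies House or GDS; all field names are invented for illustration.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class VerificationAuditRecord:
    one_login_id: str       # pseudonymous GOV.UK One Login account identifier
    company_number: str     # Companies House registered company number
    director_name: str      # the named director the attempt relates to
    outcome: str            # e.g. "verified", "failed", "escalated"
    attempted_at: datetime  # when the verification attempt occurred

# A single record of this shape would let Companies House answer the SAR:
# which director attempted verification, via which login, with what result.
example = VerificationAuditRecord(
    one_login_id="onelogin-0000",             # hypothetical identifier
    company_number="01234567",                # hypothetical company number
    director_name="A. Director",              # hypothetical director
    outcome="failed",
    attempted_at=datetime.now(timezone.utc),
)
print(example)
```

The point is not the exact fields but the linkage: without some record joining a login identifier to a named director, compliance cannot be evidenced either way.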
With SARs now also submitted to GDS and the Department for Science, Innovation and Technology (DSIT), I intend to escalate to the Information Commissioner's Office (ICO) if these concerns are not addressed. Legal obligations should never be built on systems that are opaque, untraceable, and immune to oversight.
Edit, 8 August 2025 (relevant to the issue):
A UK court has ordered HM Revenue & Customs (HMRC) to disclose whether it used artificial intelligence in deciding to reject research and development (R&D) tax credit claims. The ruling came after tax expert Tom Elsbury filed a Freedom of Information request in December 2023, suspecting AI involvement based on the rejection letters. HMRC initially refused, citing concerns that disclosure could aid fraudulent claims; that stance was later upheld by the Information Commissioner's Office.
However, the First-tier Tribunal ruled that the public interest outweighed those concerns, giving HMRC until 18 September to respond. Judge Alexandra Marks found Elsbury's arguments “compelling”, noting that HMRC's refusal to confirm or deny AI use risked undermining public trust. Elsbury warned of the potential dangers if public large language models such as ChatGPT had been used for tax assessments, particularly where sensitive defence-related innovations might be exposed.
The case comes amid heightened scrutiny of R&D tax credits due to fraud concerns, though critics argue HMRC’s approach has unfairly penalised legitimate businesses. HMRC said it is reviewing the decision and considering next steps.
Attribution:
Based on reporting by Emma Agyemang, Financial Times (© The Financial Times Limited 2025).