In this blog, we highlight how, as proposed in the UK government’s National Data Strategy, the CDEI is increasingly working in partnership with public sector bodies and industry on live projects, and building out its capability to help the government to enable and deliver responsible data-driven innovation.
This update follows the publication of our review into bias in algorithmic decision-making, as well as outputs from our AI Monitoring team, who help us to track and investigate developments in AI and data-driven technology. Over the summer, we published the first iteration of our AI Barometer, our review of public sector data sharing, and multiple iterations of our COVID-19 repository, as well as several explainer blogs which have sought to introduce readers to novel uses of AI and data during the pandemic.
Testing and scaling approaches with real-world partners
We’ve begun to work with real-world partners across a range of sectors. Through these collaborations, we’re applying, testing and refining tools that we have developed, as well as identifying barriers to responsible innovation and working out how to address them. This kind of project-based working enables us to gather a diverse range of real-world insight that will inform our advice to government and industry.
The CDEI does not make decisions about the use of particular technologies or dictate what partners do. Rather, we work collaboratively with organisations to put governance principles or mechanisms in place, which enable responsible decisions to be made about the use of AI and data. To inform our advice, we engage across partner organisations, as well as with civil society, industry, academia and the public. We seek to understand the specific contexts in which the different organisations are operating, as well as the expectations of citizens.
The projects we have underway include:
- Police Scotland: Following on from the Royal United Services Institute (RUSI) study we commissioned last year which proposed a framework for the use of new technologies in policing, we are working with Police Scotland to develop a tailored governance framework which helps them address the ethical challenges posed by data-driven technology.
- Bristol City Council: Here, we’re testing the trust matrix that we laid out in our public sector data sharing paper with a real-world partner that has ambitious plans to maximise the value of the data it holds for the benefit of local citizens. The goal is to enable innovation in local government use of data while maintaining public trust.
- Ministry of Defence: We’re working with the Ministry of Defence (MoD) on the responsible use of AI in defence. The focus of this work is to help the MoD create a principles-based framework for the responsible development, use and governance of AI systems across the defence portfolio.
As this work progresses and we start working with new partners, we will compare findings and methodologies, and consider whether governance approaches can be replicated elsewhere. Cross-cutting issues could include the role of independent oversight, approaches to transparency in different contexts, as well as broader insights relating to trustworthiness.
Developing an AI assurance ecosystem
As stated in our recently published review into bias in algorithmic decision-making, we have also initiated a programme of work on AI assurance. Various groups need ways to have confidence and trust in the use of algorithms, in part by having assurance that algorithmic systems are as fair, secure and safe as they claim to be. There are emerging technical approaches to achieving algorithmic assurance, but these are not always suited to the needs of other stakeholders, such as regulators, executives, citizens or consumers. Part of our work on AI assurance examines the extent to which existing models of assurance (such as audit, impact assessment and certification) translate to AI contexts, where they may need to be adapted to account for the opportunities and risks of AI tools, and the role that standards could play in supporting this.
The development of a trustworthy AI assurance ecosystem is essential to the responsible adoption of AI and data-driven technology in the UK. We are working with partners in academia, industry and the public sector to test and refine approaches to assuring algorithms. We plan to bring together a diverse range of organisations with an interest in this area, to help us identify how the overall AI assurance ecosystem in the UK needs to develop, as well as what action is needed to support this.
Building the CDEI’s multidisciplinary capabilities to enable and deliver responsible innovation
The National Data Strategy asks the CDEI to help the government promote responsible data-driven innovation and provide practical support for interventions in the tech landscape. This includes supporting the piloting of approaches to algorithmic transparency and ensuring the trustworthy use of data in the public sector.
To help achieve this, we are building two cross-cutting functions. First, a technology function, which will provide expert technical support to the CDEI’s work and enable us to deliver technically focused projects on methods such as privacy enhancing technologies. Second, a public attitudes insights function, as we revealed in our recent blog on public engagement, which will ensure that our advice is grounded in deep engagement with the public, including underrepresented and harder-to-reach groups.
As part of the consultation on the National Data Strategy, which ran from 9 September to 9 December 2020, DCMS consulted on the CDEI’s proposed future functions. These include: partnership working; providing practical support for interventions in the tech landscape; and monitoring novel uses of AI and data-driven technology. The consultation also asked how statutory status might help the CDEI to deliver its remit. DCMS is currently analysing the feedback it received and will publish the outcome of the consultation in the New Year.
In the meantime, we’ll continue to build out and test these new ways of working, providing regular updates along the way. We’d also like to hear from individuals and organisations who would like to talk to us in more detail about the projects described above. Please get in touch with us at firstname.lastname@example.org or via the comments section below.