Policy Brief
On regulation for data trusts
Published by Interface
July 06, 2021
Executive Summary
Data trusts are a promising concept for enabling data use while maintaining data privacy. Data trusts can pursue many goals, such as increasing the participation of consumers or other data subjects, putting data protection into practice more effectively, or strengthening data sharing along the value chain. They have the potential to become an alternative model to the large platforms, which are accused of accumulating data power and using it primarily for their own purposes rather than for the benefit of their users. To fulfill these hopes, data trusts must be trustworthy so that their users understand and trust that data is being used in their interest.
It is an important step that policymakers have recognized the potential of data trusts. This should be followed by measures that address specific risks and thus promote trust in the services. Currently, the political approach is to subject all forms of data trusts to the same rules through "one-size-fits-all" regulation. This is the case, for example, with the Data Governance Act (DGA), which gives data trusts little leeway to evolve in the marketplace.
To encourage the development of data trusts, it makes sense to define them broadly as all organizations that manage data on behalf of others within the existing legal framework (including competition, trade secret, and privacy law). Which additional rules are necessary to ensure trustworthiness should be decided for each use case, weighing the risk it entails against the need for incentives to act as a data trust.
Risk factors can be identified across sectors; among them, in particular, are whether data is stored centrally or in a decentralized manner and whether use of the data trust is voluntary or mandatory. The business model is not a main risk factor. Although many regulatory proposals call for strict neutrality, several data trusts that monetize data or are vertically integrated, and are therefore not strictly neutral, nevertheless appear trustworthy. At the same time, it is unclear what incentives exist for developing strictly neutral data trusts. Neutrality requirements that go beyond what is necessary make it less likely that the desired alternative models will develop and take hold.
Four use cases (medical data, PIMS, product passports, and agricultural data) illustrate how risk- and incentive-based regulation might look. These use cases differ in their goals, in whether the data is personal, in how risky sharing it is, and in how widely it is shared.
The first use case is medical data, which holds enormous potential for medical research to develop new and more personalized forms of diagnosis and treatment. At the same time, the data is highly sensitive and includes current treatment data as well as potential future risk factors. Risks associated with sharing that data include self-censoring behavior, discrimination, and treatment failure if data is not interpreted carefully.
To use medical data more extensively, a legal basis should be created for data processing by scientific and commercial organizations conducting medical research with data provided by a data trust. To ensure that risks remain manageable, IT security must be certified by a state-supervised body. Furthermore, data access should be designed so that only the data necessary for the research is accessible and personal identification is reduced as much as possible, for example, through pseudonymization. Organizations operating in areas where discrimination is likely, such as insurance and advertising, should be excluded.
The second use case is personal information management systems (PIMS), which are intended to help consumers enforce their rights and interests more effectively. However, consumers have been reluctant to use these services, and companies such as large platforms have found it easy to circumvent these systems. At the same time, there is a risk of abuse in direct dealings with consumers (e.g., through misleading information and menu navigation).
To control the risks and, at the same time, support the development of PIMS, we propose making model terms and conditions for PIMS the basis for a certification that identifies them as trustworthy. These terms and conditions should include minimum standards for IT security and require explicit consent for the monetization of personal data. Furthermore, there should be transparency requirements that make the monetary and non-monetary transfer of data visible. The terms and conditions should also restrict the use of data by affiliated services so that it takes place under the same conditions as for external services. Overall, the intention is to align PIMS with the interests of consumers. Companies such as social media platforms can then be obligated to cooperate with certified PIMS. With these safeguards, it also makes sense to allow PIMS to represent consumers more comprehensively, for example, to grant or deny consent on behalf of their users, as "authorized agents" under the California Consumer Privacy Act (CCPA) do.
The third use case is product passports, which allow products and product attributes to be tracked across the value chain and have enormous potential for promoting a circular economy. Several initiatives promote data-based resource reuse and recycling, but they often fail due to high administrative and financial burdens and limited management relevance.
It is not obvious that there is a need for restrictive regulation of data trusts seeking to offer product passports. Instead, it is more promising to provide legal clarity on data sharing between companies and to use government demand strategically to encourage the use of product passports in government procurement.
The fourth use case is agricultural data, which can help not only increase agricultural yields but also target resources more effectively. This data is increasingly being collected and used, although a major obstacle is farmers' sometimes hesitant interest in digitizing their operations.
Regulatory restriction of agricultural data trusts does not appear to be necessary. Instead, more incentives can be provided, for example, by making more government data available for use in the agricultural sector.
Recommendations for action across sectors
Regulating data trusts should not increase existing legal uncertainty and complexity but reduce it. This is necessary to incentivize the development of new models and approaches. Where additional requirements establish trust and reduce risks, lowering hurdles elsewhere is also justified. Overly strict neutrality requirements inevitably mean that data trusts can be provided only by the government, which creates other potential problems. Instead, it is more productive to use legal restrictions to prevent specific conflicts of interest.
Where specified requirements are met, certification can make the trustworthiness of data trusts visible; this is particularly useful where the risk of overly restrictive regulation is high and where, for example, information asymmetries call for intervention. Another pragmatic way to promote data trusts is to use pilot projects and government demand strategically. However, this is no substitute for developing new models, especially business models.
Whether data trusts can fulfill the high hopes placed on them depends largely on how the regulatory framework that applies to them is designed. Overall, regulatory proposals for data trusts should aim to make data use and data protection more compatible. To this end, it is helpful to focus on specific risks that are not covered by the existing legal framework; at the same time, it is also helpful to remove hurdles that stand in the way of this goal.