Avoiding harm and continuing to seek and clarify informed consent are vitally important. Mitigation of harms takes many forms and can involve nearly every part of an organization, especially when harms are not immediately noticed or their scope is large.

Scope control and fail-safes

Data-related harms usually fall into one of two categories: unintended disclosure of raw data (such as photos of a user or their credit-card information) or improper decisions made based on data about a user. These decisions can be made by humans (such as a decision on whether or not to prescribe a medication), by hybrid human-machine processes (such as a loan decision influenced by a credit report), or by machines alone (such as automatic re-routing of vehicles based on traffic data). Strategies to mitigate such harms, and to respond when they occur, depend on the types of decisions being made.

Revocation and distributed deletion

While pre-release design is critical to meet the “do no harm” expectation, designing with the ability to adapt post-release is equally critical. For example, a social network in which users directly offer up data about themselves (whether for public or private consumption) would likely launch with privacy controls available from day one. However, the system’s owners may find that users are not aware of the available privacy controls, and introduce a feature whereby users are informed or reminded of the available settings. In such a case, users should be able to retroactively apply changes to their past shared data: any change a user makes to their privacy settings should affect not only future shared data, but anything they have previously shared. In this way, a system that was not initially designed to allow fully informed consent could be adjusted to allow revocation of consent over time. However, such a capability requires the system’s designers to have planned for adaptation and future changes. And, given the interdependence of various software features, if a breach or unintended effect occurs, plans should include how data can be removed from the entire data supply chain, not just one company’s servers.
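Propagating a revocation across a data supply chain is, at its core, a fan-out-and-verify problem: every downstream holder of the data must acknowledge deletion before the revocation is complete. The following minimal sketch illustrates one way to track that; the `DeletionRequest` and `DownstreamPartner` types and the retry-until-empty idea are hypothetical illustrations, not a standard protocol.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class DeletionRequest:
    """A user's revocation of consent for a specific shared record."""
    user_id: str
    record_id: str
    requested_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))


class DownstreamPartner:
    """Hypothetical stand-in for a partner that received shared data."""

    def __init__(self, name: str):
        self.name = name
        self._deleted = set()

    def delete(self, request: DeletionRequest) -> bool:
        # A real integration would call the partner's deletion API and
        # handle authentication, timeouts, and partial failures.
        self._deleted.add((request.user_id, request.record_id))
        return True


def propagate_deletion(request, partners):
    """Fan a deletion request out to every downstream data holder.

    Returns the names of partners that did not acknowledge deletion,
    so the request can be retried or escalated; revocation is only
    complete once this list is empty.
    """
    unresolved = []
    for partner in partners:
        try:
            acknowledged = partner.delete(request)
        except Exception:
            acknowledged = False  # treat errors as "data may still exist"
        if not acknowledged:
            unresolved.append(partner.name)
    return unresolved


if __name__ == "__main__":
    chain = [DownstreamPartner("analytics-vendor"),
             DownstreamPartner("ad-network")]
    pending = propagate_deletion(
        DeletionRequest(user_id="u-123", record_id="photo-42"), chain)
    print("partners still holding data:", pending or "none")
```

The design choice worth noting is that the function reports failure rather than assuming success: a deletion that silently fails at one partner leaves the user’s data in the wild, which is exactly the post-release harm this planning is meant to prevent.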

One practice for mitigating the risks associated with data usage is coordination between stakeholders in webs of shared computing resources. This collective coordination and alignment on key standards is known as “federation” in the IT context. Standards for the ethical and uniform treatment of user data should be added alongside existing agreements on uptime, general security measures and performance.²⁰ Federated identity management (and, as part of that management, single sign-on tools) is a subset of broader discussions of federation. Identity management is another critical component of managing data ethics: stakeholders accessing customer data (as well as customers themselves) must be verified as permitted to access that data, as sketched below.
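At a data-access boundary, federated identity management reduces to a concrete check: verify who is asking, and confirm they are permitted, before returning anything. The sketch below shows the shape of that check under simplified assumptions; the names (`TRUSTED_ISSUERS`, `ACCESS_POLICY`, `fetch_customer_record`) are hypothetical, and a production system would validate cryptographically signed assertions (e.g. SAML or OpenID Connect tokens) issued by the federation’s identity provider rather than trusting a plain dictionary.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Identity:
    """A verified identity asserted by a trusted identity provider."""
    subject: str  # who is asking (a user or a partner service)
    issuer: str   # which federation member vouched for them


# Illustrative stand-ins: the federation's trusted issuers and a
# per-record access policy, both hypothetical.
TRUSTED_ISSUERS = {"idp.example-federation.org"}
ACCESS_POLICY = {
    "customer-7": {"customer-7", "support-team"},
}


def verify_token(token: dict) -> Identity:
    """Accept an identity assertion only from a federation member.

    A real implementation would check a cryptographic signature and
    expiry on the assertion instead of trusting these fields as-is.
    """
    if token.get("issuer") not in TRUSTED_ISSUERS:
        raise PermissionError("assertion not from a federation member")
    return Identity(subject=token["subject"], issuer=token["issuer"])


def fetch_customer_record(token: dict, record_id: str) -> str:
    """Return customer data only to verified, permitted identities."""
    identity = verify_token(token)
    permitted = ACCESS_POLICY.get(record_id, set())
    if identity.subject not in permitted:
        raise PermissionError(f"{identity.subject} may not read {record_id}")
    return f"<data for {record_id}>"  # placeholder for a real lookup


if __name__ == "__main__":
    token = {"subject": "support-team",
             "issuer": "idp.example-federation.org"}
    print(fetch_customer_record(token, "customer-7"))
```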

Communicating impact

As part of an investigation into a 2015 shooting incident in San Bernardino, California, the US Federal Bureau of Investigation (FBI) filed a court request for Apple’s assistance in creating software to bypass security protocols on an iPhone owned by one of the perpetrators. Apple’s public letter to its customers explaining its decision to challenge that request provides a glimpse into the complexity and potential risks of such cases. How society addresses matters such as these will be central to shaping 21st-century ethics and law. Apple chose an intentional, bold, and values-driven stance.²¹ Perhaps most relevant to the issues of informed consent and the intent to do no harm was Apple’s choice not only to make a statement about its position and intention, but also to explain in layman’s terms how the current security features function, how they could be subverted by the proposed software, and what the future risks of having such software in existence would be, should the company comply with the FBI’s request. In the words of Tim Cook, Apple’s CEO, “This moment calls for public discussion, and we want our customers and people around the country to understand what is at stake.”

This move stands in stark contrast to the often-lampooned iTunes EULA (see R. Sikoryak’s “The Unabridged Graphic Adaptation [of] iTunes Terms and Conditions”), which may be seen as a small annoyance users scroll past and accept without reading in order to access their music.²² As with most EULAs, the dense legalese makes it difficult for users to determine the significance of any changes they are asked to “review” and “accept.”

As users increasingly realize the importance and value of securing their data and sharing it selectively and with intent, brands that declare responsibility for educating their users about data security and use have an opportunity to build trust and loyalty from their customers. By going beyond liability-limiting processes that occur as a mere technicality in the user flow, and instead taking proactive measures (as Apple does by reminding users when apps are using their location data), companies can establish themselves as industry leaders in ethical data use.