Given the chaotic nature of data in motion, designers need to understand that users may not fully consider (or be aware of) the entirety of their data’s use. It is important that data-driven products are designed to meet existing user expectations—but also engage in expectation-setting for new use cases.
Understanding the user’s needs and desires
“Persona modeling” for human actors
To ensure that the privacy and harm risks for all parties in a data supply chain are properly addressed and managed, it’s essential to create maps of the various emotional, social, and functional tasks humans want or need to do. Developing “persona models”, which are then mapped to real-life interviews after a product or application is released, is a critical component of agile development. Such models are also valuable in agile approaches for discovering harms previously unknown to developers.
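One minimal sketch of such a persona model, in Python: each persona records the emotional, social, and functional tasks it represents, along with harms surfaced in post-release interviews. The field names and the example persona are illustrative assumptions, not a prescribed schema.

```python
# Hypothetical persona-model record. Field names are assumptions for
# illustration; a real model would be shaped by actual interview data.
from dataclasses import dataclass, field

@dataclass
class Persona:
    name: str
    functional_tasks: list
    social_tasks: list
    emotional_tasks: list
    potential_harms: list = field(default_factory=list)

    def log_harm(self, harm: str) -> None:
        """Record a harm surfaced in post-release interviews."""
        if harm not in self.potential_harms:
            self.potential_harms.append(harm)

# Example: a persona the product team might otherwise overlook.
survivor = Persona(
    name="pseudonymous user",
    functional_tasks=["maintain a profile under a chosen name"],
    social_tasks=["stay reachable by friends"],
    emotional_tasks=["feel safe from a known abuser"],
)
survivor.log_harm("forced legal-name disclosure reveals identity")
```

Keeping harms as first-class data on the persona, rather than in a separate report, makes it natural to revisit them each release cycle.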
For example, transgender and domestic violence use cases were not fully considered in Facebook’s push for users to go by real names (or “authentic names” in Facebook’s terminology). Because Facebook did not fully conceive of the many ways people use names—or why they might not share their legal names—users were required to title their profiles with their legal names. This created a difficult situation for these disadvantaged users, and prompted a group of individuals to write to Facebook expressing their concern.³ Facebook has since introduced a more nuanced interpretation of its naming policy, providing a doctrinal guide for both users and Facebook staff that explains the “why” of authentic names while still precluding false names used for impersonation.⁴ It outlines a much broader set of options for identity verification and appeals, and demonstrates some line of sight between Facebook’s users, front-line employees, developers, and leadership.

In Positive Computing: Technology for Wellbeing and Human Potential, the authors explain that research “proving” a positive or negative outcome of any one technology is nearly impossible because of the complexity of modern systems.⁵ Instead, the authors (and many multidisciplinary technologists) guide designers of such systems to focus on intentions and outcomes.
The why of a user’s use of a given technology is as important as the how of their use. If a user is clear about why they are using a system, and the designers of that system also have some idea of why the user is sharing data within it, proactive, positive choices about the universe of data transformations to engage in—and those to avoid—can be baked in. For example, a professional social network with multiple subscription options or profile types, such as job-seekers vs. sales professionals, could infer (and verify) that job-seekers might react negatively to disclosure of new activity on their profiles. Such disclosure could alert their current employer that they are exploring other jobs—potentially causing unnecessary stress or even premature loss of employment. Conversely, sales-focused users might react positively to the disclosure of activity on their profiles. Proactively (and explicitly) asking the “why” of a user’s disclosure of data about them, and verifying the “why” when new activity happens, can drastically lessen the likelihood that user intent will be misunderstood.
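As a rough sketch of this idea in code, a disclosure check could combine the inferred default for the user’s stated intent with an explicit re-confirmation of that intent since their last activity. The profile types, field names, and defaults below are assumptions for illustration only.

```python
# Hypothetical sketch: gate activity broadcasts on a user's stated "why".
# Intent labels and disclosure defaults are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class UserProfile:
    name: str
    stated_intent: str               # e.g. "job_seeking" or "sales"
    intent_reconfirmed: bool = False # re-verified since last activity?

# Defaults inferred from intent; a real system would verify these with
# the user rather than assume them.
DISCLOSURE_DEFAULTS = {
    "job_seeking": False,  # hide activity: an employer might see it
    "sales": True,         # broadcast activity: visibility helps sales
}

def should_broadcast_activity(profile: UserProfile) -> bool:
    """Broadcast only if the intent's default allows it AND the user
    has re-confirmed that intent; unknown intents default to private."""
    default = DISCLOSURE_DEFAULTS.get(profile.stated_intent, False)
    return default and profile.intent_reconfirmed

job_seeker = UserProfile("avery", "job_seeking", intent_reconfirmed=True)
seller = UserProfile("blake", "sales", intent_reconfirmed=True)
```

Defaulting unknown intents to private reflects the same principle: when the “why” is not understood, the least harmful disclosure choice wins.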
Persona modeling for machine (or “thing”) actors
The Internet of Things can in some ways be better thought of as a Social Network of Things—networks of devices and data with their own relationships and decisions to make. By modeling the various “needs” of a device to connect with other devices, pass data, and use that data to make decisions, it becomes easier to identify data paths that are vulnerable to unauthorized sharing, or through which parties might gain access without approval.
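One way to make such modeling concrete is to treat devices and parties as nodes and data flows as edges, then enumerate any path by which data could reach a party outside the approved set. The device names, flow graph, and approved set below are assumptions invented for the example.

```python
# Illustrative sketch: find data paths that reach unapproved parties.
# Device names, flows, and the approved set are assumptions.
from collections import deque

# Who each device or party forwards data to.
FLOWS = {
    "thermostat": ["home_hub"],
    "home_hub": ["vendor_cloud", "voice_assistant"],
    "voice_assistant": ["ad_network"],  # an unexpected edge
    "vendor_cloud": [],
    "ad_network": [],
}

APPROVED = {"thermostat", "home_hub", "vendor_cloud", "voice_assistant"}

def unapproved_paths(source: str) -> list:
    """Breadth-first search from source; collect every path that
    first reaches a party not in the approved set."""
    found = []
    queue = deque([[source]])
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node not in APPROVED:
            found.append(path)
            continue  # stop exploring past an unapproved party
        for nxt in FLOWS.get(node, []):
            if nxt not in path:  # avoid cycles
                queue.append(path + [nxt])
    return found

print(unapproved_paths("thermostat"))
# → [['thermostat', 'home_hub', 'voice_assistant', 'ad_network']]
```

Even a toy model like this surfaces the kind of path a team might miss: the thermostat never talks to the ad network directly, but its data can still get there through an intermediary.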