Discussion with Gila and a podcast have led me to a new definition for privacy.
Privacy is the control over information to ensure that
- No physical harm results for the person, their family, loved ones, friends, or property.
- No financial, emotional or other harm is done to the person, their family, loved ones, or friends.
- No harm is done to any social relationships of the person, their family, loved ones, or friends.
My new definition is driven partially by a story from a Google security expert at an LSE panel session. This was during one of the publicity bursts around computer security, when the press was putting about stories claiming that the public's behavior showed they didn't really care. She had gone to a morning focus session with some members of the public, and then attended a computer security conference. The public was actually very concerned about the potential use of private information by stalkers. When she mentioned this to the computer security experts, they laughed, said it showed how little the public understands, and continued on their chosen path.
I think the public was right, although they got some of the details wrong. So I've rephrased this as a harm prevention goal. That's the root requirement.
Gila argued that the requirement includes personal consent and control. I still disagree. I consider personal consent and control to be part of the mitigation strategy.
Consider an imaginary world where there is magic pixie dust that can be sprinkled on information. The magic pixie dust can read minds, understand social relationships, and predict the future. It ensures that the information is never revealed or used for anything that will cause any of the harms listed above. In that imaginary world I do not think any person would complain that they were not controlling or consenting to data releases. The magic pixie dust will do a much better job, and with much less burden on the person. People have no ability to read minds, very limited ability to predict the future, and find even much simpler tasks, like understanding the complex implications of consenting to a particular policy, very difficult.
But there is no magic pixie dust. Current technology is unable to meet most of these goals. Centrally designed policies are a very limited mitigation. Personal consent and control are another mitigation, handling the many details that a centrally designed policy will not cover. Considering consent and control as a technology, the inability to read minds, predict the future, and so on still leaves many potentials for harm. Further mitigations are needed, e.g., audit controls and process feedback loops.
By considering consent and control to be a mitigation technology, we have an established framework for recognizing that more mitigations are needed. Treating consent and control as the driving requirements closes this path, since the system will then have fulfilled its requirements.
There are other social requirements. The right of personhood is a requirement that every person be able to make the decisions that affect the future path of their life. In the absence of magic pixie dust, and given the inadequacy of privacy mitigations, this right of personhood means that the person should have the control to decide what risks of harm they are willing to accept.
The privacy requirements are also subject to societal restrictions. I will add the statement:
- Society may choose to inflict harm on a person rather than suffer societal harm. For example, the privacy of criminals will be reduced rather than accepting unrestricted social harm from crime. There are also more positive motivations, such as the infliction of privacy harm by public health organizations so that other people will suffer less harm from disease.