The impact that the use, or misuse, of personal data can have on real life is ever more apparent.
After Cambridge Analytica, the world called for tighter regulation of how tech companies (and especially social media companies) can use personal data.
China introduced the Social Credit System, which, to most privacy professionals and Black Mirror fans, sounds like an aberration; and yet we don't think twice about the black box that is our credit score.
The tragic news of Caroline Flack's death following a social media witch hunt, together with rising concerns about how social media negatively influences young minds (for example, with pro-anorexia coaching), is forcing the issue onto the regulators' table: should we set clear ground rules for the purposes for which data can and cannot be used and, if we do, where do we draw the line so that this does not impact free speech?
Leaving aside the ethical and free speech questions, on which I feel ill informed to comment, from a practical privacy standpoint the following questions arise:
1. Who is going to do this?
It is unclear whether companies will continue to self-regulate (with the ICO holding supervisory powers) or whether a public authority (presumably the ICO) will be actively involved in reviewing content.
2. Who pays the bill?
Reviewing all published content (especially if historic content is in scope) will be a costly exercise.
To do this, companies will need to invest either in large teams of reviewers or in very well trained AI (which, despite popular belief, is still a work in progress) to monitor content.
To add a layer of complication: if AI can be used, how do we fit this in with the GDPR requirements around automated decision-making?
3. What should the criteria be?
The criteria will need to be clear (or, at the very least, the penalties for getting the interpretation wrong need to be reasonable). If companies are forced to take a very risk-averse approach, censorship becomes very likely.
4. How do we gain user trust?
There is a very fine line between users feeling confident that a company is using data appropriately and the dreaded "Big Brother state" perception. Will the burden fall solely on companies, or will publicly funded information campaigns inform the general public? Who will ensure that the public is adequately informed and has a baseline understanding of what is going on?
The solution remains unclear and is one to watch. But the issue raises the question: is this misuse of social media a company problem or a society problem? And to what extent will a "prohibition" regime solve the issue, or give rise to many more?
Even Facebook founder Mark Zuckerberg has said he supports regulation. "We don't want private companies making so many decisions about how to balance social equities without any more democratic process," he said at a conference, urging governments to come up with a new regulatory system for social media and suggesting it should be a mix of the existing rules for telecoms and media companies.