I read three pieces of news over the weekend that made me think about the intersection of privacy and technology, and its impact on individuals.

The first was that a live facial recognition scanning system was being used across a 67-acre, privately owned area within King's Cross that has recently been redeveloped and includes offices, schools and retailers. The Information Commissioner's Office (ICO) issued a statement saying that it is "deeply concerned" about the use of facial recognition technology and about ensuring that any organisations wishing to use it comply with the law. The statement went on to say that the ICO "and the judiciary are both independently considering the legal issues and whether the current framework has kept pace with emerging technologies and people's expectations about how their most sensitive personal data is used."

The second was that Facebook has now ceased human review of recorded voice messages. Facebook said the reviews were used to improve its products, including the AI that transcribes messages. (Facebook added that it had never listened to audio without device permission and explicit user agreement.)

It is probably not breaking news that companies are watching and listening to us, often without our knowledge, in order to develop and deploy technology. But have you ever stopped to consider the person who has to view the videos or images, or listen to the recordings, in order to teach the AI itself? This is where the third article I read comes into the equation. People are being employed to review and tag a variety of content (some of it, such as graphic violence and pornography, very difficult to view) in order to create AI that can identify and remove unwanted images on social networks and other online services. And whilst the benefits of having such AI work accurately are obvious, what is less clear to many is the toll that creating it may take on the individuals employed in the process.