The "what if" scenario I chose to develop for my Speculative Design project is one where the use of surveillance technologies, and of facial recognition systems in particular, becomes democratised and applied to fields beyond security, all the way down to common everyday use.


The idea is that people will have facial recognition technologies installed on their personal devices, through which they can identify other people and retrieve information about them. A technology such as Google Glass would be ideal for this process, being far more discreet than pointing a phone at someone.

The hypothetical app would be able to communicate information about the person you are looking at, both personal and reciprocal.

Personal: name, age, nationality, etc.

Reciprocal: whether you know each other, where you met, how you met, etc.
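To make the two categories concrete, here is a minimal sketch of the data model such an app might return. Everything in it (the class names, fields and example values) is my own illustration of the idea above, not a real system or API.

```python
from dataclasses import dataclass

# Hypothetical sketch: the two kinds of information the imagined app
# would surface about a recognised face. All names are invented.

@dataclass
class Personal:
    name: str
    age: int
    nationality: str

@dataclass
class Reciprocal:
    already_met: bool
    where_met: str = ""
    how_met: str = ""

@dataclass
class RecognitionResult:
    personal: Personal
    reciprocal: Reciprocal

# Example: what the app might show when you glance at an acquaintance.
result = RecognitionResult(
    personal=Personal(name="Sara", age=29, nationality="Italian"),
    reciprocal=Reciprocal(already_met=True, where_met="university",
                          how_met="shared a course"),
)
print(result.personal.name, result.reciprocal.already_met)
```

The split matters for the argument that follows: the personal fields replace small-talk questions, while the reciprocal fields replace memory itself.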

As I argued in my previous post, such an automatic process is likely to have a negative effect on people, who might lose their ability to recognise others.
This first point is actually quite symbolic and, although it might appear a bit weak or even paradoxical, I'd like to use it to criticise the influence of technological progress on human behaviour, relationships and cognition: people rely on technologies for everyday tasks and are losing the skills to carry out those activities themselves.
In that same post I used the example of maps: people are so used to looking at interactive maps and GPS systems that they are gradually losing their orientation and wayfinding skills.

The reason they lose these skills is simply that processes which were meant to belong to the sphere of deliberative thinking are turned into automatic processes.

“The deliberative processes always come into play after the automatic but, in Kahneman’s words, they are lazy. They will go along with the automatic processes unless there is something surprising or irregular and/or we are operating in novel circumstances or performing tasks that require vigilance and/or deliberation. […] Automatic processes do not require active control or attention.”

Noel Sharkey, Towards a principle for the human supervisory control of robot weapons

However, reconnecting to the brief this project stems from, Autonomous Weapons and Meaningful Human Control: I am speculating not only that these apps and processes will be automatic, but that they will be autonomous. They would think with their own artificial mind, leaving little or no room for humans to intervene in the process.

People would be profiled by these machines, which would collect information from the Internet, from the databases they have access to, and from every other possible source. Moreover, the machines would be able to create connections, express opinions and influence humans with their views.
We can already see that humans question machines ever more rarely, taking for granted that their technology is perfect, or almost so. Human opinions about another person might therefore end up completely dictated by machines and metadata.

Human relationships would inevitably change: they would be governed by a mechanical process, leaving no space for the deliberative thinking that characterises human activity.

Here are some possible consequences that could derive from this process.


Unlike machines, humans have limited memory; they tend to forget things. Although for many activities this can be considered a weakness, for social relationships it is surely a positive aspect: it is part of what makes people more "human".

If people always remembered everything, interactions with one another would be much more complicated than they are now.

For example, people don't always pay close attention to their behaviour towards acquaintances. Most of the time they try to be as nice as possible, but there isn't the same commitment as with a friend, a colleague or a family member.
Yet one of the things that makes most relationships possible, especially with acquaintances, is precisely this lack of extreme attention to detail: people don't remember every slightly incorrect behaviour, often because they didn't even notice it.
But what would happen if a machine reminded people, every single time, of whatever a person did to them or might have thought of them?
I reckon it would become very difficult to establish relationships, or if not necessarily difficult, at least completely different.

The things I am referring to are small and trivial, but their impact should not be underrated, especially if exploited by technologies:

e.g. Last month John walked past Sara three times without greeting her, perhaps pretending not to know her.
e.g. While Matt was in the lift, he saw David entering the building's hall but didn't wait for him, pressing the button and going up.

All these things are easily forgotten because they are not relevant. But what happens when you see a person again and the app has already formed a definite idea of them, based on their personal and reciprocal information, making its own assumptions and reminding you of all the small bad things? Or when, without even telling you why, the machine simply states that this person doesn't like you? I guess things wouldn't be so smooth anymore.
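The mechanism being criticised here can be sketched in a few lines: a system that logs trivial events, weights them with some arbitrary scale, and hands the user only a verdict, never the reasoning. The event names, weights and wording below are entirely invented for illustration.

```python
# Toy illustration (entirely made up) of how such an app might turn
# logged micro-interactions into an opaque judgement about a person.

SLIGHT_WEIGHTS = {
    "ignored_greeting": -2,   # e.g. John not greeting Sara
    "closed_lift_door": -1,   # e.g. Matt not waiting for David
    "smiled": +1,
}

def verdict(events: list[str]) -> str:
    # Sum the weights of the logged events; unknown events count as 0.
    score = sum(SLIGHT_WEIGHTS.get(e, 0) for e in events)
    # The user never sees the score or the log, only the conclusion.
    return "this person doesn't like you" if score < 0 else "this person is friendly"

log = ["ignored_greeting", "ignored_greeting", "closed_lift_door", "smiled"]
print(verdict(log))  # prints: this person doesn't like you
```

The point is not the arithmetic, which any designer could tune differently, but that incidents a human would forget become permanent inputs, and the deliberation collapses into a single automatic output.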

This could obviously work in the opposite direction too: the app could remind people of good things you have done, though some of those actions might also be involuntary.

Again, going back to the discourse on Autonomous Weapons: this sort of technology might identify your enemies or, more dangerously still, create enemies out of nowhere. It could incentivise violence and bad behaviour, especially in individuals prone to anger and aggressive conduct.


The negative aspect is not only the machines' "reminding", but the consequences this deterrent would create. People would start to watch every behaviour they have and become less natural, knowing they would somehow be recorded. They would feel much more under pressure, and would probably start to avoid social life, or at least no longer find it as easy as it is now.
If people are already concerned about pictures and videos taken in particular situations, such as a night out drinking a bit too much, imagine when everything is stored. Would people still be truly free to have fun, be alternative and just not give a s***?


With this technology it might be possible to discover things about a person before even asking. There would be no space for basic questions: I would only need to stand in front of you and point my device or smart lens at your face to find out about you.

Even now many questions are avoided, since a lot of information can be retrieved online from social network profiles. It is not as direct as standing in front of a person, but it is quite close.
We could arrive at the absurd situation of two people standing in front of each other without saying a word, getting to know each other through their devices.


Obviously these technologies would lead to the destruction of privacy. People in public would be subject to millions of smart eyes: not only security cameras and surveillance systems, but an entire population able to identify exactly who you are. The concept of privacy might disappear completely, or survive only for a few specific things.