In the previous article, we discussed social media manipulation and addictive design, up to the point of how big tech companies harvest our data to manipulate us. In this part, we go further.
Free Will? More like “their will”
So, what do they do with all the data they take?
They use it to train models, and those models are used to predict and manipulate user behavior.
The applications of such models are practically limitless, but the primary objective so far has been generating revenue. And that revenue comes mostly from advertising.
Personalized advertising
When a system knows so much about you, it’s only natural that it can suggest things that will grab your attention. You may have seen ads for a product you had only just thought of buying, or genuinely need but never disclosed a wish for anywhere, flooding your screen. How do they do that? The prediction model simply knows you too well. That’s not so bad, rather convenient even, isn’t it? Well, it is, until things start spiraling downward…
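To make the mechanics concrete, here is a minimal, purely hypothetical sketch (no platform’s actual code; the interest weights and ads are invented) of the core idea: a model turns your behavioral history into an interest profile, scores every candidate ad by predicted click probability, and the highest-scoring ad wins your screen.

```python
import math
from dataclasses import dataclass

@dataclass
class Ad:
    name: str
    features: list[float]  # how strongly the ad matches each tracked interest

# Hypothetical interest profile inferred from clicks, likes, watch time,
# searches, etc. Higher weight = stronger inferred interest.
USER_WEIGHTS = [2.1, 0.3, 1.7]  # e.g. [running shoes, cookware, headphones]

def predicted_click_prob(ad: Ad) -> float:
    """Toy logistic model: sigmoid of (user interests . ad features)."""
    score = sum(w * f for w, f in zip(USER_WEIGHTS, ad.features))
    return 1 / (1 + math.exp(-score))

candidates = [
    Ad("trail running shoes", [0.9, 0.0, 0.1]),
    Ad("cast-iron pan", [0.0, 0.8, 0.0]),
    Ad("noise-cancelling earbuds", [0.1, 0.0, 0.9]),
]

# The ad you see is simply the one the model bets you're most likely to click.
best = max(candidates, key=predicted_click_prob)
print(best.name, round(predicted_click_prob(best), 3))
```

The math here is trivial; what makes the real systems powerful, and unnerving, is that the feature vector is built from thousands of intimate behavioral signals rather than three made-up numbers.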
Fake news with consequences
Fake news spreads far faster than real news, because it’s in our very biological nature to amplify outrageous information. And a system whose prime objective is to generate money naturally rewards whatever gets the most clicks and likes, which all too often means conspiracy theories. The process compounds: once you believe one conspiracy theory, you judge everything through that lens of perception, which leads you to believe in even more of them. Slowly, we are plunging into an age of misinformation.
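As a toy illustration of that incentive, consider a hypothetical feed that ranks purely by simulated engagement. The objective never mentions misinformation; but if outrageous posts engage more on average (the assumption baked into this sketch), they float to the top on their own.

```python
import random

random.seed(42)

# Hypothetical posts: in this toy model, outrageous content simply
# earns more engagement on average.
posts = [(f"post {i}", random.random() < 0.3) for i in range(10)]

def simulated_engagement(is_outrageous: bool) -> float:
    base = max(random.gauss(1.0, 0.2), 0.0)
    return base * (2.5 if is_outrageous else 1.0)  # outrage engages more

# Rank by engagement alone: the revenue-maximizing objective.
feed = sorted(posts, key=lambda p: simulated_engagement(p[1]), reverse=True)

for title, outrageous in feed[:5]:
    print(title, "<- outrageous" if outrageous else "")
```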
Biasing views: The world through a smaller looking glass
We live in an age where we have access to a tremendous amount of information. With all the options to choose from, it seems our experience as human beings has expanded. But is that really the case?
Now that our personal data can be used to manipulate us, whatever the algorithm decides will pique our interest gets presented to us in a manner we can rarely resist. As a result, the pool of information in front of us may look bigger, but our perception of it keeps getting narrower. Your viewing window is cleverly modified and curated by the algorithm. There has been plenty of discussion about how social media shaped people’s outlook in the previous U.S. election. When services like Google serve a filtering purpose, making one thing appear bigger than another, it’s easy to tweak them to benefit those willing to pay.
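That narrowing can be shown with a tiny feedback-loop simulation (the numbers are entirely made up; the rich-get-richer dynamic is the point): the system shows you more of whatever you engaged with before, so exposure concentrates on fewer and fewer topics.

```python
import random

random.seed(0)

topics = ["politics", "sports", "science", "art", "travel"]
weights = {t: 1.0 for t in topics}  # start with equal exposure to everything

for _ in range(50):
    # Show a topic, favoring whatever the algorithm currently bets on.
    shown = random.choices(topics, weights=list(weights.values()))[0]
    # Engaging with what's shown reinforces it for next time.
    weights[shown] += 0.5

total = sum(weights.values())
for t in sorted(topics, key=weights.get, reverse=True):
    print(f"{t:9s} {weights[t] / total:.0%} of your feed")
```

Run it with different seeds: which topic wins changes, but something almost always wins. That is why two people can open the same app and effectively see two different internets.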
The war with our biology
The human mind is, primitively enough, biology destined to interact with a local community, a group of roughly 100-300 people. It’s important for an individual to feel appreciated by the others in their tribe or community. Now the internet lets us interact with thousands of people in a day; the tribe has expanded enormously. So many views and opinions to take in, and social approval dosed out to us every five minutes. It’s easy to get confused and lose perspective amid this chaotic sea of interactions.
That “Ting” notification sound
Pavlov, the famous physiologist, conditioned a dog’s behavior with the help of a bell. He rang the bell, then fed the dog a treat. After a few repetitions, the sound of the bell alone, treat or no treat, was enough to trigger the dog’s anticipation of a reward.
Social media conditions us in the same way. Imagine you’re in a class and you hear a “ting” notification from the phone lying on the table in front of you. You know what, forget the sound; the phone merely lying in plain view is enough. Every five minutes, you’ll feel the urge to check it for a new notification or a message from a friend. It’s not something you can consciously control, but something a carefully formulated manipulation algorithm has deep-seated in your brain. There’s an entire discipline built around this, called “Growth Engineering”: teams of engineers hacking into your psychology to make the system grow on you.
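The underlying trick is well documented in behavioral psychology: intermittent, unpredictable rewards condition habits more strongly than steady ones. Here is a toy simulation of that effect; the update numbers are invented, but the asymmetry they encode, a rare payoff outweighing many empty checks, is the whole design:

```python
import random

random.seed(1)

def conditioned_urge(reward_prob: float, checks: int = 200) -> float:
    """Toy habit model: a phone check that pays off with a notification
    strongly reinforces the urge; an empty check barely weakens it."""
    urge = 1.0
    for _ in range(checks):
        if random.random() < reward_prob:
            urge += 0.3   # a like or message: a hit of social approval
        else:
            urge -= 0.05  # nothing new: a mild, quickly forgotten letdown
    return max(urge, 0.0)

# Rewards arrive only a quarter of the time, yet the urge keeps growing.
print("urge after 200 checks:", round(conditioned_urge(0.25), 2))
```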
Bottom line is, social media is a drug. The one with the biggest community of users, in fact.
Where lies the Ethical Dilemma?
The gigantic systems we keep feeding our data are getting better and better with each passing second. In return, they provide us with convenience. Tremendous advancements in science and AI have been made possible by all that data. Getting directions on Google Maps is easier than ever; search engines have become more precise and more intuitive about what exactly we want to find. Cutting off their data supply might mean halting scientific progress. But then again, manipulating people into producing a biased domain of data is itself a real danger of all these manipulative maneuvers. It’s a huge paradox, really.
Have these platforms become a Frankenstein’s monster that even their creators can no longer control?
Is there a single entity or person to blame? Absolutely not. Technology is just a tool; people use it for good or for corrupt ends. And those who developed these manipulative structures were not aware of the dangers at all. Nobody, not even the engineers behind Facebook’s recommendation engine, thought that a system written by about a hundred people would control the lives of two billion. Technology was at too early a stage to foresee such a monstrous outcome, and the core goal of revenue generation seemed like a harmless idea back then. Oh, how times have changed!
The people who helped create these platforms are not immune to their charms either. Tim Kendall, former president of Pinterest and a former Facebook executive, admitted there were times when he came home and couldn’t get off his phone, couldn’t make time for his kids and family. It’s a great irony, falling prey to the very thing you helped create. The system has gained such enormous momentum that there’s basically no stopping it now. Even knowing how the tricks work, people are still susceptible to them. Every once in a while we learn about these dangers, worry for two minutes, and then casually go back to scrolling through Twitter or Facebook like nothing ever happened.
Are Countermeasures possible?
If by “countermeasures” you mean taking down all social media or shutting off their revenue-generating malpractice, then the straight answer is “no”. That Frankenstein’s monster has grown far too big, and too many lives and important things are tied to it. The remedy that is still possible is fine-tuning these systems to be less destructive for us and for society as a whole. Ways we can do that:
- Become aware of the problem, and admit that it exists.
- Don’t trust everything you read on the internet.
- Fight the urge. Limit the data we feed to the system, for our own health and well-being: turn off push notifications, ration internet usage, don’t click on ads, and so on.
- Search by typing, instead of clicking on recommendations.
- Form a collective stance against manipulation and force the big tech companies to make their algorithms less manipulative and less revenue-driven.
- Tech giants need to understand that “making people see more ads so that we can make money” is not healthy, and to look for alternatives.
- Ensure proper governance of personal data.
- Put psychologists and ethicists on the development and maintenance teams of any software that risks becoming too addictive.
Humanity needs to figure out an alternative to these extreme manipulation tactics; otherwise, things will only keep getting worse. Let’s hope regulations for a “better internet society” emerge soon.