So it’s been a while since I last posted, and I wanted to take a step away from straight-up data analysis. One of the ideas that’s been bouncing around my head the past couple of weeks is how our thoughts and actions are affected by the information we are exposed to. These are my thoughts on the subject:
Hypothesis: The machine learning algorithms that generate and control how we view information on social media are crude lenses that affect how we think and act. Because our own thoughts, and thus our own behavior, are used as inputs to these algorithms, a positive feedback loop is created that perpetuates their money-making goals. As this cycle continues, our own behavior and thought patterns begin to mimic the analytical structure and goals of the machine learning systems. Thus we are transformed into perfect consumers.
Let me take you through how I arrived at this conclusion:
Users and Information
First, there exists Information in the world: ideas, concepts, events, and the media that represents them. Then there is us: the Users. We belong to certain demographics, we browse the internet, and we spend money and participate in our culture in certain ways.
The (machine learning) Model
Users and their behaviors are inputs into a very large machine-learning Model. The goal of this Model is to increase page clicks ($) and time spent on the app (also $). The Model filters and arranges Information on social media in such a way that this goal is attained. Models that don’t do this effectively are scrapped and new ones are trained. The relationships presented to Users on social media only reflect the Model’s limited understanding of the preexisting associations/correlations between the Information and the User.
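To make this a little more concrete, here’s a rough Python sketch of what “filter and arrange Information to maximize engagement” could look like in its most stripped-down form. Everything here (the feature names, the scoring heuristic) is made up for illustration; real ranking systems are enormously more complicated.

```python
# Hypothetical sketch of an engagement-driven feed ranker.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    features: dict  # e.g. {"topic": "pride", "has_image": True}

def predict_engagement(user_features: dict, post: Post) -> float:
    """Stand-in for a learned model: estimate expected clicks / dwell time.

    In practice this would be a trained model; here it is just a
    placeholder heuristic based on interest overlap.
    """
    interests = user_features.get("interests", set())
    return 1.0 if post.features.get("topic") in interests else 0.1

def rank_feed(user_features: dict, candidates: list[Post], k: int = 10) -> list[Post]:
    # The "goal" is baked in right here: whatever maximizes predicted
    # engagement floats to the top of the timeline.
    return sorted(candidates,
                  key=lambda p: predict_engagement(user_features, p),
                  reverse=True)[:k]

# Toy usage: my feed leads with the topic I'm already correlated with.
me = {"interests": {"pride", "data"}}
feed = rank_feed(me, [Post("a", {"topic": "pride"}), Post("b", {"topic": "golf"})])
```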
Okay, so what?
We, the Users, decide what to make of the media presented to us, forming thoughts and opinions about the events/ideas/concepts shown to us. But because the Model chooses what we see, those thoughts and opinions can only reflect the Model’s limited understanding of the preexisting associations in the language and imagery used in the content.
For clarity’s sake, let me take a moment to describe how this can happen with an example: It’s Pride week, and all over the US, young millennials like me are taking to the streets and marching in solidarity for equal rights. My timeline is infused with pictures of smiling young people holding up rainbow flags, with captions along the lines of “love wins”. How did my timeline decide that I should see this content?
Since I am connected with all of these people via my location, education, age group, race, gender, spending habits, and posting habits, the Model will assume that I too espouse these views (and indeed I do). So, knowing that I am highly correlated with all of these happily smiling Users, the Model arranges the media on my timeline so that I see all of this happiness and begin to associate it with the values that media espouses. Doing so serves the Model’s purpose, because I, for one, like feeling happy, and if I feel happy when I am on social media, then I’ll be on social media more often.
The problem is that I am only seeing this because of shallow correlations. For example, maybe because I am white, I don’t see as many of the people of color who were also marching. This could be for many reasons: I am not connected with many people of color, their messaging/language is different from mine, or showing me that content isn’t correlated with more time spent on the app. This leads me to think that not many people of color marched in the Pride parade, when in fact there was representation proportionate to the racial makeup of the city. Like the Model, we begin to think in simple correlations without asking ourselves why those correlations exist in the first place.
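Here’s a toy version of that skew, again with made-up data and a deliberately crude similarity measure, just to show the mechanism: if the feed weights posts by how demographically similar the author is to me, perfectly real content simply falls below the cutoff.

```python
# Hypothetical illustration of correlation-based filtering hiding real content.

def similarity(viewer: dict, author: dict) -> float:
    """Jaccard similarity over crude demographic attributes."""
    a, b = set(viewer.items()), set(author.items())
    return len(a & b) / len(a | b) if a | b else 0.0

viewer = {"age_group": "25-34", "city": "Chicago", "race": "white"}

posts = [
    {"author": {"age_group": "25-34", "city": "Chicago", "race": "white"},  "tag": "pride"},
    {"author": {"age_group": "25-34", "city": "Chicago", "race": "black"},  "tag": "pride"},
    {"author": {"age_group": "45-54", "city": "Chicago", "race": "latino"}, "tag": "pride"},
]

# Only posts from "similar enough" authors make it onto my timeline.
visible = [p for p in posts if similarity(viewer, p["author"]) >= 0.5]
print(len(visible), "of", len(posts), "pride posts shown")  # 2 of 3
```

The point isn’t that any platform literally filters on a Jaccard similarity of demographics; it’s that any correlation-driven cutoff like this will quietly under-represent whatever I’m least correlated with.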
We internalize the connections shown to us as an accurate portrayal of reality because it agrees with our preexisting ideologies.
We then verbalize and act out our algorithmically stunted worldviews toward each other, deepening our engagement with the apps, because the very way we verbalize and act has been optimized toward spending time on them.
Since User behavior is itself an input into the Model, this creates a vicious cycle that ultimately transforms us into the ultimate cash cows.
So, if the Model’s goal is to maximize page clicks and time spent on the app, and the relationships presented by the Model reflect this goal, then our thought processes and understanding of the world are gradually reshaped so that we click on more ads and spend more time on social media.
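You can see the shape of that loop in a bare-bones simulation. The numbers below (two content “buckets”, click rates I invented) are pure assumption, but the dynamic is the point: when the exposure weights are retrained on the clicks they themselves induced, whatever the Model already over-shows, it learns to show even more.

```python
# Hypothetical feedback-loop simulation: behavior shaped by the feed becomes
# the training signal that narrows the feed further.
import random

random.seed(0)

weights = {"in_bubble": 0.5, "out_of_bubble": 0.5}        # exposure probabilities
click_rate = {"in_bubble": 0.30, "out_of_bubble": 0.10}   # assumed user behavior

for round_ in range(5):
    clicks = {"in_bubble": 0, "out_of_bubble": 0}
    for _ in range(1000):  # 1000 impressions per round
        bucket = random.choices(list(weights), weights=list(weights.values()))[0]
        if random.random() < click_rate[bucket]:
            clicks[bucket] += 1
    total = sum(clicks.values()) or 1
    # "Retrain": next round's exposure mirrors this round's click share.
    weights = {b: clicks[b] / total for b in clicks}
    print(round_, {b: round(w, 2) for b, w in weights.items()})

# The in-bubble share climbs toward 1.0 over the rounds.
```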
Conclusion
“Social media is dumbing us down” has been said before in various forms, but maybe not quite as explicitly as I am stating it here. What I want to emphasize is that our very way of thinking is being affected. The connections we think we are making ourselves are actually nudged toward us by the Model; they already exist inside the Model. This kind of control has always been present in some form or another, through newspapers or government media outlets, but at least then it was tractable: a smaller network of humans controlled the lens and could be held responsible.
Now, we have MASSIVE volumes of data that represent mind-bogglingly complex systems. The data is then passed through a fairly large and complex automated system (the Model) that may capture some of the simpler relationships in the system. However, it certainly does not capture all of the nuanced historical, geographic, political (etc.) relationships.
My Takeaway
Next time you are scrolling your newsfeed and are served a politically polarizing ad or some political content, ask yourself: “How is this consistent with getting me to spend more time here? By showing me this, what simple causal relationships and worldviews is it trying to establish, and how does my adopting those worldviews benefit the people showing me this ad?”
Questions/comments/concerns: tweet at me! @ben_dykstra