Body image on social media: trade-offs and responsibility
Perception is in the eye of the beholder, so what do we do?
we see women,
extraordinary women
spend their day
thinking
worrying about
the kilos her body carries
whether he will like her in that dress
or if he prefers jeans
- an old poem of mine
PS: see also Content Moderation: b*obs, or no b*obs?
One of my first projects at TikTok was reviewing their policies on negative body image and weight loss for user-generated and monetized content. As with everything, there was more nuance to account for than I expected. This post looks at body image on social media and the trade-offs involved in policy-making, and reflects on what kind of responsibility platforms can take to address harmful body ideals. I will also go through a few examples of how something bad can be good and how something positive can be seen as negative, and I have included a few questions that might help spark a conversation about what the problem actually boils down to.
Disclaimer: the policy scenarios listed here are random examples I thought of as I was writing this post; they do not reflect any specific policy I worked on at TikTok or any other company. If you want information about TikTok's policies, check their website.
1. Body image on social media
It's well established at this point that social media does not positively influence body image and ideals. Social media platforms make us lose track of what is real and what is not: edited photos, surgeries, unhealthy eating patterns, obsessions with how people look, beauty trends, constant exposure to alternative, 'better' lives, and ever more beauty. Look amazing, and life will be perfect!!!! Woho!
It's a lot to handle for anyone, especially for younger, more vulnerable audiences. We make assumptions about what a person is eating based on a 20-second video that we know by design is staged, and while we are aware of Photoshop, we struggle to see which images are fake and which are not. Deep down, I don't think we care whether it's real, because the people in the photo just look so good. Knowing something is fake is not the same as feeling and understanding it. Even if we know that one specific photo has been edited, I don't think we understand how that photo, together with millions of other small signals, offline and online, contributes to our general view of beauty. Our personalities and views are shaped by generations of distorted body ideals and their commercialization - they are embedded in us now.
Platforms have done some work from a policy perspective in this area. For instance, my former colleague Kate England led the efforts to launch TikTok's advertising policies prohibiting harmful weight loss products on the platform, the first policy of its kind in big tech. In other words, TikTok decided to give up revenue for the sake of being responsible, and by doing so, it told the world that it does not support these kinds of products. Shortly after, Pinterest joined in, and Google enabled users to restrict weight loss content.
Now, ads are easier to manage because platforms can be stricter about which clients they onboard and which products and services align with their values. This is much harder on the organic side because of ...drumroll… free speech, nuance, subjectivity, etc. The negative body image conversation boils down to two exciting topics: trade-offs and responsibility.
2. Trade-offs
All bad things can also be seen as good. Think of any bad thing in society, and you will find something good about it, even if it sounds absurd. Sometimes it's easy to conclude something is bad because the negative aspects clearly outweigh the positives. In the content policy world, this is not always the case. The most harmful content is simply blocked; most things fall into a grey area, and creating content policies is all about trade-offs. You have to weigh risks and ask yourself: which option is the least problematic? Which one carries the fewest risks and negative aspects? What happens if we roll this out at scale? (I'll sketch a toy version of this weighing exercise right after the examples below.)
Let's look at a few examples of areas that can be both good and bad. Imagine you are watching videos on Netflix showing weapons, beauty routines, and diets. What could good and bad look like?
🔫 Weapons
Bad, if:
Displaying graphic and violent content
Promoting the use of guns to minors
Encouraging or inciting violence
Etc., etc.
Good, if:
Raising awareness about violence and using guns as a visual tool to express this idea
Raising awareness about an organization working against gun violence
💅🏻 Beauty/Make-up Videos
Bad, if:
Encouraging unhealthy ideals
Making users feel as if they need make-up
Encouraging surgery
Good, if:
Presented as a creative expression, like art or a hobby
Presented in a positive and empowering way
🥗 Diet videos
Bad, if:
Encouraging or promoting eating disorders or overly strict diets
Making users feel as if they need to lose weight
Good, if:
Promoting a healthy lifestyle (though this can also be harmful, depending on the wording)
Encouraging a varied diet and feeling good
Showing diets for people who struggle with their weight and need to lose it for health reasons
Showing how people can look different even though they weigh the same
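To make the trade-off framing concrete, here is a minimal, entirely hypothetical sketch in Python. The option names, risk categories and numbers below are all invented for illustration - they do not reflect any real policy or scoring system, at TikTok or anywhere else.

```python
# Toy illustration only: weighing hypothetical policy options for a
# "sugar-free diet video" case by summing rough, made-up risk estimates,
# then picking the least problematic option.
from dataclasses import dataclass


@dataclass
class PolicyOption:
    name: str
    risks: dict[str, float]  # risk name -> invented severity estimate, 0..1

    def total_risk(self) -> float:
        return sum(self.risks.values())


options = [
    PolicyOption("allow", {
        "harm_to_vulnerable_users": 0.6,          # may reinforce negative body image
        "over_removal_of_positive_content": 0.0,
        "enforcement_errors_at_scale": 0.0,
    }),
    PolicyOption("remove", {
        "harm_to_vulnerable_users": 0.1,
        "over_removal_of_positive_content": 0.7,  # healthy-lifestyle content disappears too
        "enforcement_errors_at_scale": 0.4,       # "diet" is hard to classify reliably
    }),
    PolicyOption("restrict_for_minors", {
        "harm_to_vulnerable_users": 0.3,
        "over_removal_of_positive_content": 0.2,
        "enforcement_errors_at_scale": 0.5,       # depends on age verification working
    }),
]

least_risky = min(options, key=PolicyOption.total_risk)
print(f"Least problematic option: {least_risky.name} "
      f"(total risk {least_risky.total_risk():.1f})")
```

The point is not the numbers - real policy work is far messier than summing three guesses - but the shape of the exercise: every option carries risk, and the job is to pick the least problematic one at scale.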
Now,
Platforms can't shield users from all kinds of content. For example, they can't know whether a user watching a video about sugar-free diets will be negatively impacted and feel as if they need to lose weight, orrrr whether that user will feel empowered to live a healthy lifestyle. Platforms can allow content promoting a sugar-free diet because sugar is bad for us, so the message is positive for users (=healthy). Still, they can also say it should be removed because it can spark negative thinking around body image (=unhealthy).
Both statements are true, so which one should they pick?
It's impossible to account for how content will be perceived, because it depends on the user and their specific relationship with food, beauty and health. I do not deny that platforms amplify problematic body ideals. They do. Nor am I denying that minors are more vulnerable than adults; all of this is factually true. But I am questioning how platforms should act, since it all depends on how users handle and perceive the content. This brings me to my next topic: responsibility.
3. Responsibility
We know that many videos can accumulate over time into a general feeling and understanding of beauty, and it's hard to pinpoint exactly what is harmful (unless it's very explicit, I assume). This topic is nuanced as hell and so, so complex. Most of us, especially younger audiences, are not equipped with the mental tools required to consume these videos without being negatively affected. It's difficult to say what kind of responsibility platforms should take because we don't even understand the causes and effects yet; it's a complex system we are talking about. We don't know how restricting content around diets and weight loss would impact individuals. Just because platforms are a key driver of these ideals doesn't mean that reversing course and blocking content will fix the problem.
I really liked that TikTok prohibited some, not all, weight loss products, because it was an excellent example of how platforms can act for no other reason than that they believe it benefits users. But should platforms do this with everything? If so, what criteria need to be fulfilled for them to act and restrict? How do we measure risk and causation? And would restricting certain content lead to positive effects in a world where people have access to everything at all times anyway?
4. Final thoughts
As a final note, here are a few questions I wrestled with when thinking about weight loss and body image from a policy perspective. Let's stay with the sugar-free diet example.
Is the problem that platforms are allowing these videos? I.e., are platforms indirectly normalizing and encouraging these ideas?
Or
Is the problem that too many vulnerable teenagers are using these platforms - would a higher age requirement solve this issue (knowing that age verification is another complicated problem)?
Or
Is the problem that users do not see enough variety of content to make sound judgments, i.e., should platforms automatically display content around the benefits of eating sugar directly after a video promoting sugar-free diets?
Or
Is the problem that we cannot equip young people with the right tools and mental health support to handle this type of content? We know that a) negative body image often correlates with other mental health issues, and b) young users are more vulnerable.
In summary, is it a platform problem, a product problem, a societal problem, or a user problem?
I have no idea.
Is responsibility assuming people can't think for themselves?
I have no idea.
Or is responsibility about doing the right thing even when it doesn't benefit you, like when TikTok decided to ban ads promoting weight loss products?
Maybe, but not always.