Q+A: Real Help For ‘Fake Smiles’ On Instagram

This week Instagram rolled out a tool that allows users to flag content when they think a friend might be in serious emotional trouble. The tool, which uses this peer alert system to cue human operators to message the troubled user and offer help, is intended to help prevent suicide and other self-harming behaviors. According to Nazanin Andalibi, a doctoral researcher in the College of Computing & Informatics who, along with Andrea Forte, PhD, an associate professor in the College, recently published research on how people disclose depression and other sensitive experiences (e.g., self-harm, suicide, eating disorders) on Instagram, the tool is a natural continuation of the ad hoc support communities that have been forming on the photo-sharing platform for years.

The study, titled “Sensitive Self-disclosures, Responses, and Social Support on Instagram: the Case of #Depression,” used visual and textual content analysis and statistical methods to examine 800 posts tagged with #depression, along with their associated comments, sampled from more than 95,000 depression-tagged photos posted by 24,920 unique users over the course of a month.

The findings suggest that many users hide their true feelings from family and friends on Facebook or offline, but reveal their actual sentiments on Instagram, both because of the relative anonymity a pseudonymous Instagram account provides and because they need a safe way to disclose these experiences. And, by and large, users who turn to Instagram seeking help or support tend to receive it from the community of people who comment on their posts.

Andalibi provided some perspective on how the suicide-prevention tool will be received and could affect these communities on Instagram.

Facebook previously rolled out a similar tool, to a relatively positive response – will the same approach work as well on Instagram?

We observed evidence of a sense of shared identity among Instagrammers who share potentially risky and sensitive content, and they often mention in their posts or bio lines that these accounts are “secret” or “second” accounts – something that is not possible on some platforms, such as Facebook.

As we wrote in our study, “the existence of posts about social contact, social comparison, and seeking, as well as providing, support yields more support for the notion that Instagram serves as an ad hoc platform for emergent support groups. A big feature of support groups is the ‘helper therapy principle,’ whereby, by helping others, people also help themselves. We speculate that this may also be the case on Instagram.”

Could the tool be viewed as a violation of privacy? Is it an appropriate way to intervene?

I think this is a question that Instagram should ask people who post or search for this content.

I do think that although Instagram’s statement that “posts with words or tags you’re searching for often encourage behavior that can cause harm and even lead to death” may be true in some cases, it may be too strong a reading in others. A few people I interviewed told me that they search for #depression or similar hashtags to find others who are like them, and also to provide support and be there for them.

One question that I think needs to be answered is whether such messages make people who search or post about terms like depression and suicide feel as though what they are doing is wrong or dangerous. Could these messages make people perceive Instagram as a less safe space to make cathartic disclosures or look for sympathetic others? These are open questions; I don’t think we know the answers.

Could this tool really help people?

Instagram has implemented this feature, and I think it is imperative for the company to get feedback from the target community and see whether, and how, it helps.

Our findings suggest that engaging in disclosures on Instagram has the potential to improve emotional wellness.

As we write in the paper, “on the one hand, it may help people feel that others care and “get” them, and try to “cheer them up.” On the other hand, consistently posting content such as depressive feelings could become one’s “brand” and may be inadvertently reinforced by continually getting positive feedback (e.g., emotional support) for expressing it.”

It’s a delicate problem to design for disclosure and support, and I’d be interested to see how the Instagram user community responds to the new feature, given how they are already using the platform to seek and provide support.

Regardless, I think it is great that Instagram is not simply banning this type of content and is instead taking a more nuanced approach with these tools.


“Sensitive Self-disclosures, Responses, and Social Support on Instagram: the Case of #Depression” by Nazanin Andalibi, Andrea Forte, PhD, and Pinar Ozturk, of Stevens Institute of Technology, will be presented at the Association for Computing Machinery (ACM) Conference on Computer-Supported Cooperative Work and Social Computing in February.

For media inquiries contact Britt Faulstick, assistant director of media relations, bef29@drexel.edu or 215-895-2617.
