
Our project is focused on combating the spread of misinformation on YouTube. Branching from a broader research effort, this work specifically focuses on adding credibility signals to YouTube’s search results pages, with the goal of increasing user agency in determining the reputability of a video.

An example of how existing signals can affect a user’s choice is a video’s view count. A user might choose the more popular video, seeing a high view count as a sign that the video is more trustworthy. However, on a social media platform like YouTube, a video’s virality can have almost no connection to the credibility of its content. This can lead a user to believe and/or share misinformation inadvertently.

Adding credibility signals alongside the other search result data can give a user greater insight as they choose what to watch. While this project will be useful to a broad audience of YouTube users, the main target audience we are choosing to focus on is YouTube users who inadvertently watch and share misinformation.

To figure out which credibility signals would most benefit this audience, we chose to employ a contextual-inquiry-based approach. The broader project this work stems from recently ran a study covering data relevant to this work. To gather more specific information on credibility signals, we ran further interviews focusing on relevant tasks. These interviews were conducted over Zoom, and each participant consented to the interview being recorded. Participants were also informed that they could stop the interview at any time if needed. For this project, three tasks were used to gather more insight on credibility signals.

First, the participants were asked to search for three prescribed search terms on YouTube. For each term, participants were asked to choose a video they would like to watch from the results. While choosing, we asked them to think aloud and explain why they made the choice they did. This exercise gives us insight into which credibility signals currently make the biggest impact on users. The three search terms used during these interviews were “Should I go vegan”, “Climate change 2050”, and “Different COVID vaccines”, in that order. The terms were chosen to cover a range of topics, from personal and inconsequential to educational and impactful. By choosing these three different levels, we wanted to determine whether and how the topic of a search changes the way a user interacts with credibility signals.

The second exercise was to compare the YouTube search results page to the Google search results page, again using the search term “Different COVID vaccines”. We asked each participant to compare and contrast the differences they saw between the two pages and to say which they preferred overall. The purpose of this activity was to get participants to organically discuss the presence of credibility signals in factors like ranking, available information, and additional context, without priming them toward insights we might be looking for. Google was chosen as the point of comparison because it surfaces significantly more supplementary information related to a search, uses a different ranking algorithm than YouTube, and is a popular platform that the majority of users are familiar with.

The final exercise asked users to evaluate which of two videos they would find more trustworthy by looking at their playback pages (without playing the videos). Even though this project focuses on the search results interface, the playback page concentrates a participant’s attention on the information available for each video. The goal of this exercise was to gain more insight into what information about each video could feed future credibility signals. In summary, using the AEIOU framework:

- Activities: analyzing search results, comparing YouTube and Google search results, and evaluating a video playback page.
- Environment: a structured Zoom call, using the participant’s personal browser of choice to conduct the activities.
- Interactions: between the interviewer and the participant, as well as between the participant and YouTube and Google.
- Objects: the Zoom call, the participant’s browser, the participant’s device from which they conducted the interview, the YouTube webpage, and the Google webpage.
- Users: the interviewees, who were mostly college students.

From the further interviews conducted, we were able to expand on the findings of previous research. Some of the main takeaways from the study are as follows.

Users put a lot of trust in the source of the content, i.e. the channel name or content creator. Relying on past interactions with certain sources, they used this familiarity to move forward with their decision. For example, some users had the impression that a source like Buzzfeed was more clickbait-oriented, so they were less likely to trust the information in its video than in one from a source they deemed more credible, like the CDC. Interestingly, the information users leveraged about sources generally came from previous knowledge of and impressions about a source. Moving forward, we would like to find a way to indicate the credibility of a source for users who do not have previous knowledge of the name.
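To make this direction concrete, here is a minimal sketch of how such a source-credibility signal might be modeled alongside a search result. This is purely a hypothetical illustration in TypeScript; the field names and example values are our own assumptions, not a finalized design from the study:

```typescript
// Hypothetical model for a source-credibility signal shown next to a
// YouTube search result. All fields are illustrative assumptions.
interface SourceCredibilitySignal {
  channelName: string;          // e.g. "CDC"
  verifiedAffiliation?: string; // known institutional affiliation, if any
  trackRecordNote?: string;     // short human-readable context about the source
}

interface AnnotatedSearchResult {
  videoTitle: string;
  viewCount: number;                      // existing popularity signal
  credibility?: SourceCredibilitySignal;  // proposed addition
}

// Example: the signal gives a user with no prior knowledge of the
// channel some context beyond raw popularity.
const example: AnnotatedSearchResult = {
  videoTitle: "Different COVID vaccines explained",
  viewCount: 1_200_000,
  credibility: {
    channelName: "CDC",
    verifiedAffiliation: "U.S. government public health agency",
    trackRecordNote: "Official channel of a public health institution",
  },
};
```

The idea is simply that a user who has never heard of a channel would see contextual information, such as an institutional affiliation, presented next to the popularity signals that already exist.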

From the comparison between YouTube and Google, we learned that there are mixed opinions on the availability of summary information in what Google refers to as ‘knowledge panels’. While some users found the information helpful in making their choice, others found that the additional information started to overwhelm their ability to find what they were looking for. Overall, participants did tend to show an implicit trust in the information shown in these panels, as it appeared more integrated into the body of the page.

Another interesting insight was how participants interacted with content previews and descriptions. Participants would seek out more information about who created the content or where the information in the video came from. The way the text was presented also made an impact: participants found that the production quality of the text directly affected how much they would trust the video. They would also look for more detailed information on the video itself, such as recognizable indicators of affiliation with institutions they knew.

A final insight we would like to share relates to the ranking of search results. Participants were more likely to choose one of the first few results that appeared, indicating a level of trust in the platform’s ranking decisions. It will be interesting to determine how we can use this information as we add credibility signals, and whether there is a way to make users re-evaluate their implicit trust in a ranking algorithm.

We are excited to continue the work on this project, especially as we have so much useful information to help us progress into the design and implementation process.

