Social Media and Healthcare
Articles and Discussions on the intersection of Social Media and Healthcare. Relevant to Healthcare Practitioners, Pharma, Insurance, Clinicians, Labs, Health IT Vendors, Health Marketeers, Health Policy Makers, Hospital Administrators.
Curated by nrip
Scooped by Plus91

Incorporating Social Media into Medical Education

Slides from a Social Media workshop for medical educators at Academic Internal Medicine Week 2010. Presenters represent 3 different universities and different rol...

What if Dr House used Twitter?

Bertalan Meskò on social media and health: Bertalan (Berci) is a geek. He is a medical futurist who started out as the project leader of 'personalised medicine thr...

Social Media Release Increases Dissemination of Original Articles in the Clinical Pain Sciences


A barrier to dissemination of research is that it depends on the end-user searching for or ‘pulling’ relevant knowledge from the literature base. Social media instead ‘pushes’ relevant knowledge straight to the end-user, via blogs and sites such as Facebook and Twitter. That social media is very effective at improving dissemination seems well accepted, but, remarkably, there is no evidence to support this claim. We aimed to quantify the impact of social media release on views and downloads of articles in the clinical pain sciences.

 

Sixteen PLOS ONE articles were blogged and released via Facebook, Twitter, LinkedIn and ResearchBlogging.org on one of two randomly selected dates. The other date served as a control. The primary outcomes were the rate of HTML views and PDF downloads of the article, over a seven-day period. The critical result was an increase in both outcome variables in the week after the blog post and social media release. The mean ± SD rate of HTML views in the week after the social media release was 18±18 per day, whereas the rate during the other three weeks was no more than 6±3 per day. The mean ± SD rate of PDF downloads in the week after the social media release was 4±4 per day, whereas the rate during the other three weeks was less than 1±1 per day (p<0.05 for all comparisons).
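The randomised crossover allocation described above can be sketched as follows. This is a minimal illustration, not the authors' code; the article labels, date labels and seed are all hypothetical:

```python
import random

random.seed(1)  # hypothetical seed, only to make the sketch reproducible

# 16 articles; for each, one of two candidate dates is randomly chosen
# for the blog/social-media release, the other serving as its control.
articles = [f"article_{i:02d}" for i in range(1, 17)]
allocation = {}
for article in articles:
    release, control = random.sample(["date_A", "date_B"], 2)
    allocation[article] = {"release": release, "control": control}

# The outcomes would then be the rates of HTML views and PDF downloads
# over the seven days following each date, compared release vs. control.
for article, dates in list(allocation.items())[:2]:
    print(article, dates)
```

Because each article serves as its own control, between-article differences in baseline popularity drop out of the comparison.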

 

However, none of the recognized measures of social media reach, engagement or virality related to either outcome variable, nor to citation count one year later (p>0.3 for all). We conclude that social media release of a research article in the clinical pain sciences increases the number of people who view or download that article, but conventional social media metrics are unrelated to the effect.


We hypothesised that social media release of an original research article in the clinical pain sciences increases viewing and downloads of the article. The results support our hypothesis. In the week after the social media release, there were about 12 extra HTML views of the research article per day, and 3 extra downloads of the article itself per day, that we can attribute to the social media release. The effects varied between articles, showing that multiple factors mediate the effect of a social media release on our chosen outcome variables. Although the absolute magnitude of the effect might be considered small (about 0.01% of the people we reached were sufficiently interested to download the PDF), the effect size of the intervention was large (Cohen’s d >0.9 for both outcomes). The effect of social media release was probably smaller for our site, which is small, young and specialised, than it would be for sites with greater gravitas, for example NEJM, BMJ or, indeed, PLOS.
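The reported per-day rates are enough to check the large effect size directly. A minimal sketch, assuming the common pooled-SD form of Cohen's d (the text does not specify which variant was used):

```python
import math

def cohens_d(mean1, sd1, mean2, sd2):
    """Cohen's d using a pooled SD (equal group sizes assumed)."""
    pooled_sd = math.sqrt((sd1 ** 2 + sd2 ** 2) / 2)
    return (mean1 - mean2) / pooled_sd

# Release week vs. other weeks, per-day rates as reported above
d_html = cohens_d(18, 18, 6, 3)  # HTML views: ~0.93
d_pdf = cohens_d(4, 4, 1, 1)     # PDF downloads: ~1.03
print(d_html > 0.9 and d_pdf > 0.9)  # prints True
```

Note how a modest absolute difference (12 extra views per day) can still yield a large standardised effect when the baseline rate and its spread are small.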

Relationship between Reach and Impact

The idea of social media reach is fairly straightforward - it can be considered as the number of people in a network, for example the number of Facebook friends or Twitter followers. A blog may have 2,000 Facebook ‘likes’, 700 Twitter followers and 300 subscribers - a reach of 3,000 people. Impact is less straightforward. The various definitions of social media reach each reflect a substantially larger population than our most proximal measures of impact – HTML views and PDF downloads of the original article. One might suggest that impact should reflect some sense of engagement with the material, for example the number of people within a network who comment on a post. From a clinical pain sciences perspective, changes in clinical practice or clinician knowledge would be clear signs of impact, but such metrics are very difficult to obtain. Perhaps this is part of the reason that researchers are using, we believe erroneously, social media reach as a measure of social media impact.
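The reach arithmetic above is simple aggregation, and that is exactly its weakness. A toy sketch, using the example figures from the text (the commenter count is hypothetical, purely for illustration):

```python
# Naive reach: summed audience sizes across platforms.
# Note this ignores overlap (one person may appear in every network),
# so it is an upper bound on the number of distinct people reached.
audience = {
    "facebook_likes": 2000,
    "twitter_followers": 700,
    "blog_subscribers": 300,
}
reach = sum(audience.values())
print(reach)  # prints 3000

# Impact, by contrast, would count engagement events, for example the
# number of people who commented on a post (hypothetical figure):
commenters = 25
engagement_rate = commenters / reach
```

The distinction matters because, as the study found, reach-style sums did not predict the outcomes that engagement-style measures are meant to capture.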

 

There are now several social media options that researchers integrate into their overall ‘impact strategy’, for example listing their research on open non-subscription sites such as Mendeley, and joining discussions about research on social media sites such as Twitter and on blogs. Certainly, current measures of dissemination, most notably citations of articles or the impact factor of the journals in which they are published, do not take into account the social media impact of the article. New measurements, such as altmetrics and article-level metrics such as those provided by PLOS, aim to take into account the views, citations, social network conversations, blog posts and media coverage in an attempt to analyse the influence of research across a global community. There is merit in this pursuit, but, although our study relates to clinical pain sciences research, our results strongly suggest that we need to be careful in equating such measures with impact or influence, or using them as a surrogate for dissemination. Indeed, not even virality, which estimates the propensity of an item to ‘go viral’, was related to HTML views or PDF downloads.

 

This is very important because our results actually suggest that we may be measuring the wrong thing when it comes to determining the social media impact of research. That is, we showed a very clear effect of the social media release on both HTML views and PDF downloads of the target article. However, we did not detect any relationship between either outcome and the social media metrics we used. The only variable that related to either outcome was the number of HTML views of the original blog post in the week after social media release. It seems clear, then, that what matters is not the total number of people you tell about your study, nor the number of people they tell, nor the number of people who follow you or re-tweet your tweets. In fact, it appears that we are missing more of how the release improves dissemination than we are capturing.

 

The final result, that citation count did not relate to any social media measures, casts doubt over the intuitively sensible idea that social media impact reflects future citation-related impact. We used the Scopus citation count, taken almost 9 months after the completion of the experimental period, and 1–2 years after the publication date of the target articles, as a conventional measure of impact. There was no relationship between citation count and our measures of social media reach or virality. One must be cautious when interpreting this result because a citation count so soon (1–2 years) after publication is unlikely to capture new research that was triggered by the original article – although, importantly, journal impact factors are calculated on the basis of citations in the two years after publication. Suffice it here to observe that the apparent popularity of an article on social media does not necessarily predict its short-term citation count.

 

Although this is the first empirical evaluation of social media impact in the clinical pain sciences and we have employed a conservative and robust design, we acknowledge several limitations. Social media dissemination in the clinical sciences relies on clinicians having access to, and using, social media. It will have no effect for those who do not use the web and who rely on more traditional means of dissemination - ‘pulling’ the evidence. Although there was an increase in HTML views and PDF downloads as a result of social media dissemination, we do not know if people read the article or whether it changed their practice. We presumed that a portion of those who viewed the HTML version of the article would then go on to download it; however, our data suggest that a different pattern of access is occurring. Unfortunately, our data do not allow us to determine whether the same people both viewed the HTML and downloaded the article PDF, or whether different people did each. Downloading a PDF version of a paper does not necessarily imply that the person will later read it, but it does increase the probability that they will.

 

Citations and impact factors measure impact within the scientific community, whereas views driven by social media will also include interested clinicians and laypeople and, as such, measure uptake by different audiences. Although we used a variety of different social media platforms to disseminate to as wide an audience as possible, we do not know who the audience is - we can only surmise that they are a mixture of researchers, clinicians, people in pain and interested laypeople. Further, each social media strategy comes with inherent limitations in the collection of usage statistics related to a blog post. Gathering Facebook and Twitter statistics for each article is still cumbersome and is probably not always accurate. The risk in using search engines to gather data is that there is no way of knowing whether all the data have been identified. For Twitter, there is no way to accurately calculate the number of re-tweets for each post retrospectively over a longer period.

 

As a result, our Twitter data are a best estimate and may have underestimated the true values but, critically, we would expect this effect to be unrelated to our blog posts and therefore not to impact our findings. Regarding Facebook, shares, likes and comments are grouped as one statistic, but in reality only shares and comments show engagement with the post and indicate that people are more likely to have read it. Regarding LinkedIn, the only available data were the number of members of the BodyInMind group and, as such, we have no way of knowing how many members viewed the actual blog post.

 

The blog, BodyInMind.org, through which the original blog posts of PLOS ONE articles were released, experienced a technical interruption half-way through the experiment. In spite of an attempt by PLOS to retrieve the statistics, approximately five days of data were lost on several of the blog posts. This also meant that additional data on traffic, such as the percentage of traffic for each blog post from external sources such as Facebook, Twitter, LinkedIn and ResearchBlogging, could not be measured during this period. Critically and fortuitously, none of our data collection weeks fell within the affected period, so this problem, although disconcerting for those keenly following social media data, is very unlikely to have affected our primary outcomes. PLOS indicated that the technical problem has now been fixed, but similar problems may arise in the future and present an ongoing risk to studies such as ours.

 

Social influence can produce an effect whereby something that is popular becomes more popular and something that is unpopular becomes even less popular. It seems possible that articles on BodyInMind.org were shared because the site is popular among a discrete community and not because the article itself merited circulation. This possibility does not confound our main result, but it does lend weight to the common strategy of making a blog more popular as a way to boost the social media impact of individual posts. Finally, our study relied on the target articles being freely available to the public. Many journals are not open access, particularly those in the clinical pain sciences. Therefore, we must be cautious in extrapolating our results to subscription-only journals.

 

In conclusion, our results clearly support the hypothesis that social media can increase the number of people who view or download an original research article in the clinical pain sciences. However, the size of the effect is not related to conventional social media metrics, such as reach, engagement and virality. Our results highlight the difference between social media reach and social media impact and suggest that the latter is not a simple function of the former.

