An exploration of the strengths and weaknesses of using text messaging as a tool for self-report data collection in psychological research 


Technological developments have expanded the methodological repertoire available to researchers collecting self-report psychological data. There is considerable scope for new modes of data collection to supplement, or in some cases replace, the older standards of postal or online surveys. Such methodological innovation is particularly needed to facilitate research limited by the sheer difficulty of data collection, such as Ecological Momentary Assessment. Mobile telephones are a clear candidate, as they are ubiquitous, portable, accessible and convenient for both researchers and participants. Mobile phones support three methods of communication that may be used to collect self-report data: voice call, app, and SMS. For reasons of scalability and compatibility with all mobile phones, this dissertation focussed on SMS. This is the first systematic investigation of the strengths and weaknesses of using SMS as a tool for self-report psychological research.

I'm hoping sharing its content here will help form a foundation for future researchers considering whether or not to use SMS as a tool for data collection in their upcoming research, and for further methodologically-focussed investigation of this relatively new research mode.

Remember, research methods are important. Be thoughtful. Be critical.

Also, take pity on the honours and PhD students in need of participants, and on the established scientists seeking to better understand the world: learn about this kind of thing, and participate in their research.

But, I digress. Here's a wrap-up.

How is SMS being used for research?

Following in the tradition of methodologically-focussed meta-analyses and structured reviews in the psychological literature (e.g. Cook, Heath, & Thompson, 2000; Fox, Crask, & Kim, 1988; Shih, 2008), this dissertation began by asking How is SMS being used for research? The intent was to give context to the ensuing investigation, and to highlight knowledge gaps. The wide range of uses for SMS in everyday life was reflected in the answer: SMS is being used by researchers for a wide range of purposes, but rarely as a tool for self-report data collection. This meta-analysis also drew relevant information from the use of SMS in other disciplines, for purposes other than self-report. Specifically, the recurrent use of SMS as a reminder (for things such as appointment attendance) suggests that SMS may perform particularly well as a reminder prompt in a self-report research context. Interestingly, the number of studies using SMS for self-report data collection is increasing. This small but growing literature supports the assertion that SMS can be successfully applied as a tool for data collection, and that it is timely to investigate its properties as a research mode in order to guide its future use.

Are people able, ready and willing to become research participants using SMS?

Having established that there is interest among researchers in SMS as a tool for data collection, the next question was Are people able, ready and willing to become research participants using SMS? The intention was to establish whether there were technological or attitudinal barriers to the use of SMS in a self-report research context, and if so, how they may be surmounted. The answer was that people are generally able, ready, and willing, but this does not necessarily mean that they will participate in self-report research via SMS. This dissertation uncovered a decided gap between stated willingness to participate in research via SMS, and actual participation. This ‘intention-behaviour gap’ has been found in many areas, from weight loss (Sniehotta, Scholz & Schwarzer, 2005) to ethical consumer choices (Carrington, Neville & Whitwell, 2014). As discussed in Sheeran’s (2002) meta-analysis of meta-analyses regarding the intention-behaviour gap, there is a low correspondence between intention and behaviour; less than a third of the variance in a wide array of behaviours can be explained by intention. This problem of stated willingness to participate not translating into participation has been discussed in the context of volunteering for research purposes (e.g. Poole, 2012), but not specifically for SMS. The barrier between intention and responding may be particularly low for SMS, because it does not require a participant to go out of their way to respond; they may use the device typically kept nearby in everyday life, rather than having to carry or keep track of a paper diary or digital device, or having to sit at a computer. Future researchers collecting self-report data via SMS could help investigate the intention-behaviour gap in SMS participation by exploring potential demographic differences between respondents and non-respondents. If possible, seeking self-reported reasons for response, or non-response, may also be helpful.

How should a researcher design an SMS self-report study?

If you read them in order, by the third section a nuanced picture was forming. Or perhaps you just jumped right in to where the picture was formed. But like a comic book movie, you'd get more out of it if you knew what'd gone before. Anyway, SMS had successfully been used to collect self-report data in the past, and participants were generally comfortable with the idea of using it to provide self-report data. But some drawbacks were becoming clear, such as difficulty with participant recruitment. The third question was How should a researcher design an SMS self-report study? First, this was examined in terms of recruitment, response rates and response behaviour, within the bounds of question length that the literature suggested should be feasible. Then, the limits of SMS were pushed in terms of sampling frequency and the amount of information collected. The answer was to pilot-test extensively, with a focus on the following methodological issues: minimisation of the time lag between recruitment and participation, the sampling frequency, the sampling tempo (hourly, daily, weekly etc.), the number of questions, and the length of meaningful responses. Despite the attractiveness of a simple checklist of optimal conditions for using SMS to support research (e.g. “SMS should not ask more than X questions at a time”, or “Researchers should expect a Y minute response delay”), the interdependency of these methodological decisions precludes a simple set of guidelines. Rather, these results provide general principles (e.g. minimising the delay between recruitment and active participation, keeping the measure short, and making use of the capacity for frequent sampling), and highlight the importance of deliberate consideration of how a methodology may be used in a specific research context.
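For signal-contingent designs, the scheduling decisions above (study length, prompts per day, tempo) are mechanical enough to sketch in code. The function below is purely illustrative: the name `build_prompt_schedule` and all of its parameters are my own invention, not anything used in the dissertation, and a real study would feed the resulting times into whatever SMS gateway the researcher has available.

```python
from datetime import datetime, timedelta
import random

def build_prompt_schedule(start, days, prompts_per_day,
                          day_start_hour=9, day_end_hour=21,
                          jitter_minutes=0, seed=None):
    """Return a list of datetimes at which to send SMS prompts.

    Prompts are spread evenly across each day's waking window
    (day_start_hour to day_end_hour). Optional jitter shifts each
    send time randomly so participants cannot anticipate prompts.
    """
    rng = random.Random(seed)
    window = (day_end_hour - day_start_hour) * 60  # minutes in the window
    gap = window / prompts_per_day                 # even slot width
    schedule = []
    for day in range(days):
        day_anchor = (start + timedelta(days=day)).replace(
            hour=day_start_hour, minute=0, second=0, microsecond=0)
        for i in range(prompts_per_day):
            offset = gap * i + gap / 2  # centre of each slot
            if jitter_minutes:
                offset += rng.uniform(-jitter_minutes, jitter_minutes)
            schedule.append(day_anchor + timedelta(minutes=offset))
    return schedule
```

Called with, say, three days of four prompts per day, this yields twelve evenly spaced send times; setting `jitter_minutes` makes the exact times unpredictable to participants while keeping the overall tempo fixed, which is one way to balance a known response schedule against anticipation effects.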

How does SMS compare with other tools for data collection?

By the fourth section, it seemed clear that SMS could be used for self-report data collection, but just because you can do something does not mean that you should. With online and paper surveys, self-report research in general already has a well-established methodological repertoire. Although Ecological Momentary Assessment could certainly benefit from a new data collection method, SMS is not necessarily the best option. The final question was therefore How does SMS compare with other tools for data collection? Anticlimactically, the answer was SMS can compare favourably or unfavourably to other modes, depending on what you want from a data collection tool. The findings suggest that if a researcher wants a data collection tool that provides timely responses and high response rates, and is sampling once or over the course of a day, SMS holds its own with, or even outperforms, other modes like apps, email, postal and online surveys. But if a researcher is more interested in response completeness than response rates, or is sampling over a longer time period, SMS compares poorly to other modes on almost every metric.

Future directions and limitations (Or: over to you lot, and how I goofed)

My dissertation consisted of fifteen separate papers that each examined SMS as a tool for data collection in a different way. The choice to cast a wide net, rather than focus on a single narrow issue, developed as the literature revealed an extensive range of ways SMS could be used to support research.  The particular factors investigated were carefully chosen based on what the early chapters indicated may be important (i.e. how to minimise participant drop-out following recruitment), choices relevant to experience sampling methods (i.e. sampling timing and frequency), and choices specific to the most salient properties of SMS, such as the strict character limit (i.e. questionnaire and response length). This broad approach has allowed advancement in understanding a number of the methodological properties of SMS, but also leaves considerable scope for more detailed investigation of each of the factors discussed. Off you go, yes, you, keep prodding at this topic. I'm pretty SMSsed out at this point.

Another issue that bears discussion is the impact of the explicit methodological focus of the research on the ecological validity of many of the studies presented here. With the exception of Delay between recruitment and participation impacts on pre-inclusion attrition, the information provided to participants in studies involving data collection via SMS clearly stated the dual theoretical and methodological aims of each study; participants were informed about both the theoretical and methodological hypotheses of the research. This may threaten the generalisability of these findings beyond methodologically focussed research due to social desirability effects. Knowing that SMS itself was the focus of the research, participants may have inflated the effort put into responding via SMS in a misguided attempt to please the researcher. This could have resulted in systematic overestimation of the data quality that SMS provides across several studies.

At this point, it is important to draw a distinction between the potential effect of sending SMS in the context of research in general, and the effect of sending SMS in the context of methodologically-focused research. It is not at all a problem if the use of SMS in a research context encourages more detailed and comprehensive responses in comparison to everyday usage (as may have been the case in Short and Sweet? Length and informativeness of open-ended responses using SMS as a research mode). This may be a rare example of a social desirability bias that benefits the researcher, and would be present across all studies using SMS to collect data. But it is a problem if the methodological focus of research causes participants to respond in ways that they would not in research that merely happens to use SMS. This could be investigated by conducting further data collection via SMS, and manipulating whether or not participants are told the research has a methodological focus. Though realistically, it's more likely to come out in the wash when people start using SMS more earnestly in the course of conducting self-report research.

Returning to the studies, the potential for this to bias results was mitigated somewhat by an emphasis that the studies within this dissertation are exploratory research, and that the researcher was not at all invested in whether SMS was a particularly good or bad tool for data collection. The aim was always phrased around ‘investigating the properties of SMS’ rather than ‘proving the strengths of SMS’. Another approach could have been the use of a cover story or deception to hide the methodological elements of the research from participants. But this proved difficult in pilot testing, where participants were invariably curious about why they were being asked to respond via SMS, as the bulk of them (notably the university student populations) had only experienced paper or online surveys. When it came time to seek ethics approval for studies involving data collected via SMS, keeping the rationale behind using SMS from participants would have constituted deception.


The ANU Human Research Ethics Committee (They Who Must Be Obeyed, Despite Objections We Grudgingly Accept They Were Probably Right About That One Annoying Point Dammit) pointed out that such use of deception may have threatened recruitment (as failure to justify the methodology could give the impression of poorly constructed research), and were unconvinced that preserving ecological validity was sufficient grounds for deception. This view was supported by the investigation of the perceived legitimacy of SMS as a psychological research mode, which found that the perceived appropriateness of a research mode, particularly in the context of other modes that could be used, is an important factor in intention to participate. Because SMS ranks poorly on perceived legitimacy relative to other research modes, this supports the need to be upfront about, and to justify, the use of SMS when recruiting participants. Given the lack of pre-existing literature to justify the use of SMS to participants, the only cogent rationale is the actual purpose of the research: the investigation of its performance as a research mode.

One area this dissertation did not discuss was the performance of SMS when used for event-contingent sampling, where participants respond whenever the thought, feeling, or behaviour of interest occurs (Christensen, Barrett, Bliss-Moreau, Lebo, & Kaschub, 2003). This is different from signal-contingent sampling, where participants respond to a prompt sent by the researcher. With different strengths and weaknesses, both are widely used in the experience sampling literature (Reis & Gable, 2000). The frequency of ‘events’ reported in event-contingent sampling can be an informative indicator of a participant’s experience (e.g. in Ebner-Priemer, Eid, Kleindienst, Stabenow, & Trull, 2009). By contrast, signal-contingent sampling allows the researcher to set a known response schedule, rather than relying on likely changeable event-related response schedules. Only signal-contingent sampling was explored in this dissertation, as having a known response schedule is vital for investigating deviations from the desired schedule (e.g. response delays, number of missing responses), and for exploring issues of sampling frequency (e.g. feasibility of sampling a certain number of times in a given day). This choice helped to uncover the general methodological properties of SMS for collecting self-report data, but neglected the properties of SMS when used in an event-contingent design. Given that SMS prompts can substantially improve response behaviour (outlined in chapters 4 and 5), it is possible that SMS may perform poorly in an event-contingent context where prompts are not sent. Alternatively, SMS may perform very well for event-contingent sampling, as participants are likely to keep their mobile telephone nearby, avoiding the problems of forgetting or misplacing paper diaries. Future research would do well to specifically investigate the properties of SMS as a tool for event-contingent self-report research. You there, person reading this, you should totally look into that.

Another avenue future research could pursue is the application of SMS to collect data from rural or remote areas. This dissertation focussed on an urban population, sampling primarily in Canberra, the capital city of Australia. Mobile telephones are increasingly providing communication lines to rural areas which were previously unreachable due to the difficulty in installing physical telecommunication infrastructure (Lacohée, Wakeford, & Pearson, 2003). Accordingly, there is particular interest in using SMS for communicating with people living in rural areas (e.g. Githinji et al., 2014; Kamanga, Moono, Stresman, Mharakurwa, & Shiff, 2010; Lori, Munro, Boyd, & Andreatta, 2012). As the technological and social context often differs between city and rural areas (Hindman, 2000), it would be informative to establish whether the properties of SMS as a tool for research in a city sample can guide expectations of its performance with a rural sample. Road trip?

Differences in everyday SMS usage and language may limit the generalisability of the current research to other countries. Countries differ in terms of SMS history, availability, usage, culture, and language (Busse & Fuchs, 2012; Latham, 2007). These differences may be important because, as several papers in Chapter 3 demonstrated, behaviours, perceptions and attitudes toward SMS in everyday life are associated with how people view SMS in a research context. For example, in Germany, SMS usage is accompanied by a strong culture of reciprocity: if someone receives an SMS, they are highly likely to respond (Höflich & Rössler, 2002). In a research context, this may translate into particularly high response rates when communicating with German participants via SMS. In China, SMS has become an important tool for confidential political communication between citizens, because it is less susceptible to strict government monitoring and censorship than other text-based communication methods (Yan, 2003; Yu, 2004). In a research context, this perception of security and privacy may make SMS a particularly viable tool for collecting self-report data on sensitive topics in China. The physical properties of the written language may also impact on how much information can be conveyed in an SMS (Carrier & Benitez, 2010). Accordingly, expectations about the feasible length of questions and responses may need to be modified: in comparison to an English sentence, the German equivalent would require more characters, and the Chinese equivalent far fewer. There is considerable scope for future research to examine whether considerations such as these impact on how SMS may be used to collect self-report data in different countries.

Conclusion of the conclusion (most conclusive)

Together, the fifteen papers that made up my dissertation uncovered some useful guidelines for future research using SMS. In summary, the strengths of SMS as a tool for self-report psychological research included growing interest in the research community; positive perceptions of SMS as a research tool amongst potential samples; swift responses and high response rates; suitability for frequent repeated sampling; and usefulness as a reminder prompt to support other modes of data collection. Weaknesses included a large difference between stated willingness to participate and actual participation; response incompleteness; unsuitability for infrequent sampling; and problems with psychometric equivalence in relation to other research modes like online or paper surveys.

SMS constitutes a largely unused, but potentially powerful, tool for self-report data collection. The research within this dissertation paints a picture of a research mode that participants are able and willing to use, and that is particularly well suited to short bouts of high-frequency sampling. Against the backdrop of other research modes available for self-report data collection, SMS provides comparatively high response rates, but also comparatively low response completeness. Nonetheless, SMS is capable of providing robust and valid information at sampling frequencies that other modes, such as online or paper surveys, may struggle to match. The bidirectional nature of communication between researcher and participant opens up avenues for ongoing data monitoring and question clarification impossible in other modes, such as paper surveys (which are generally fixed once printed). Perhaps most importantly, SMS is ideally suited to bolstering a vitally important, but difficult form of research: Ecological Momentary Assessment.



Busse, B., & Fuchs, M. (2012). The components of landline telephone survey coverage bias: The relative importance of no-phone and mobile-only populations. Quality & Quantity, 46(4), 1209–1225.

Carrier, L. M., & Benitez, S. Y. (2010). The effect of bilingualism on communication efficiency in text messages (SMS). Multilingua – Journal of Cross-Cultural and Interlanguage Communication, 29(2), 167–183.

Carrington, M. J., Neville, B. A., & Whitwell, G. J. (2014). Lost in translation: Exploring the ethical consumer intention–behavior gap. Journal of Business Research, 67(1), 2759–2767.

Christensen, T. C., Barrett, L. F., Bliss-Moreau, E., Lebo, K., & Kaschub, C. (2003). A practical guide to experience-sampling procedures. Journal of Happiness Studies, 4(1), 53–78.

Cook, C., Heath, F., & Thompson, R. L. (2000). A meta-analysis of response rates in web- or internet-based surveys. Educational and Psychological Measurement, 60(6), 821–836.

Ebner-Priemer, U. W., Eid, M., Kleindienst, N., Stabenow, S., & Trull, T. J. (2009). Analytic strategies for understanding affective (in)stability and other dynamic processes in psychopathology. Journal of Abnormal Psychology, 118(1), 195.

Fox, R., Crask, M., & Kim, J. (1988). Mail survey response rate: A meta-analysis of selected techniques for inducing response. Public Opinion Quarterly, 52(4), 467–491.

Githinji, S., Kigen, S., Memusi, D., Nyandigisi, A., Wamari, A., Muturi, A., … Zurovac, D. (2014). Using mobile phone text messaging for malaria surveillance in rural Kenya. Malaria Journal, 13(1), 107.

Hindman, D. B. (2000). The rural-urban digital divide. Journalism & Mass Communication Quarterly, 77(3), 549–560.

Höflich, J. R., & Rössler, P. (2002). More than just a telephone: The mobile phone and use of the short message service (SMS) by German adolescents, results of a pilot study. In E. A. Villar (Ed.), Revista de Estudios de Juventud (pp. 61–64).

Kamanga, A., Moono, P., Stresman, G., Mharakurwa, S., & Shiff, C. (2010). Rural health centres, communities and malaria case detection in Zambia using mobile telephones: A means to detect potential reservoirs of infection in unstable transmission conditions. Malaria Journal, 9, 96.

Lacohée, H., Wakeford, N., & Pearson, I. (2003). A social history of the mobile telephone with a view of its future. BT Technology Journal, 21(3), 203–211.

Latham, K. (2007). SMS, communication, and citizenship in China’s information society. Critical Asian Studies, 39(2), 295–314.

Lori, J. R., Munro, M. L., Boyd, C. J., & Andreatta, P. (2012). Cell phones to collect pregnancy data from remote areas in Liberia. Journal of Nursing Scholarship, 44(3), 294–301.

Poole, G. (2012). Using psychological principles to narrow the intention-behavior gap and increase participation in HIV vaccine trials. Current HIV Research, 10(6), 552–556.

Reis, H. T., & Gable, S. L. (2000). Event-sampling and other methods for studying everyday experience. In H. T. Reis & C. M. Judd (Eds.), Handbook of research methods in social and personality psychology (pp. 190–222).

Sheeran, P. (2002). Intention–behavior relations: A conceptual and empirical review. European Review of Social Psychology, 12(1), 1–36.

Shih, T.-H. (2008). Comparing response rates from web and mail surveys: A meta-analysis. Field Methods, 20(3), 249–271.

Sniehotta, F. F., Scholz, U., & Schwarzer, R. (2005). Bridging the intention–behaviour gap: Planning, self-efficacy, and action control in the adoption and maintenance of physical exercise. Psychology & Health, 20(2), 143–160.

Yan, X. (2003). Mobile data communications in China. Communications of the ACM, 46(12), 81–85.

Yu, H. (2004). The power of thumbs: The politics of SMS in urban China. Graduate Journal of Asia-Pacific Studies, 2(2), 30–43.