Improving response rates and evaluating nonresponse bias in surveys: AMEE Guide No. 102

 

ANDREW W. PHILLIPS1, SHALINI REDDY2 & STEVEN J. DURNING3

1Stanford University, USA, 2University of Chicago, USA, 3Uniformed Services University (USU), USA




 


Introduction


 

Surveys are a central part of medical education research (Artino et al. 2014). Successful surveys depend on adequate representation from the entire group being studied. The survey results may not correctly reflect the group's opinions if not enough people, or a wide enough variety of people, respond (Groves 2006).


Thus, both the number of people who do not respond and the characteristics of the people who do not respond can impact the accuracy of survey results. It is in this context that we explore response rates and nonresponse bias, two related but separate concepts that are important for scholars who use surveys in their investigations.


Nonresponse bias occurs when the opinions of people who did not complete the survey are so different from those who did complete it that the results do not accurately represent the entire group of potential respondents (Dillman 2000a; Groves et al. 2002).


 

Defining response rate



It is important that researchers clearly define how they calculated the response rate, since different definitions can yield different results. The international community of survey researchers has endorsed using one of the six American Association of Public Opinion Research (AAPOR) definitions of response rate (Box 1) (Groves et al. 2002; AAPOR 2011; Johnson & Wislar 2012).


These six definitions differ by how they handle:

(1) partial responses (i.e. surveys with skipped questions; item nonresponse), and

(2) nonrespondents whose eligibility for participation is unknown (i.e. there is a list of potential respondents, but it is unknown whether everyone on the list meets the study inclusion criteria).

 

AAPOR RR5 and RR6 are usually the best-suited definitions for medical education research.


The issue of unknown eligibility among nonrespondents is a significant concern in public opinion research but is less commonly encountered in medical education research. For example, if someone is surveying people on a street corner about what it is like to have children, only parents would be eligible to participate. The surveyor does not know, however, whether an individual who walks by and ignores him is a parent.

 

 


Defining survey and sampling frame


A "survey" refers to any tool directed at a select group of people that is used to gather opinions (AAPOR 2011; Artino et al. 2014). The "sampling frame" defines who the potential respondents are. For example, a researcher may send a questionnaire only to the first-year medical students at a single institution; the sampling frame is then only the first-year students at that institution. A well-defined sampling frame is central to calculating a well-defined response rate.


 

Part one: Improving response rates


 

The potential respondents (the sampling frame) can decline to respond to the survey as a whole (unit nonresponse) or to just some of the survey questions (item nonresponse) (Dillman et al. 1993; Groves et al. 2002; Artino et al. 2014).


Unit nonresponse


Controllable factors that contribute to unit nonresponse (an individual not responding to the survey as a whole) generally fall into at least one of the following categories:

    • survey delivery,

    • survey acknowledgement, and

    • the potential respondent's decision to participate (Dillman et al. 1993; Groves et al. 2002).



Survey delivery


For example, an email may not be delivered because of a typographical error in the address, whereas a postal letter may be lost somewhere in the mail system. Moreover, potential respondents may have different comfort levels with different survey methods, particularly electronic surveys (Groves et al. 2002; McMahon et al. 2003). Regardless of the reason, the mode of survey transmission can yield a different response rate (McMahon et al. 2003; Beebe et al. 2007).


Postal surveys appear to have better initial response rates than web surveys, even among computer-literate college students using the internet for their classes in the USA as recently as 2011 (Millar & Dillman 2011). Of note, a single e-mail reminder raised the response rate of the web option to approximately that of a single-request paper questionnaire (51%) (Millar & Dillman 2011).


Surveys distributed by mixed modalities (e.g. email and postal mail) should be written with the exact same language and format in every modality to minimize the impact of delivery mode on item response (Dillman & Christian 2005).


Survey administrators must keep in mind that offering an online survey means offering a survey that will almost invariably be accessed by computer, smartphone, and tablet, which has extensive design implications discussed in the final section of this Guide (Buskirk & Andrus 2013).


Finally, most studies on the topic of survey delivery, such as postal mail or door-to-door surveys, use the general population (Dillman 2011). However, medical trainees represent a more captive audience than is found in the general population (Groves et al. 2002).

 

 

When presenting the data, we suggest the following: "We obtained a response rate of 75% for the 100 potential respondents, AAPOR RR6" (AAPOR 2011).



  • Mixed survey methods (e.g. internet and postal mail) (Beebe et al. 2007)

  • Personalized survey invitations (Dillman 2000a; Edwards 2002)

  • Monetary incentive (Church 1993; Asch et al. 1998)

  • Not providing an estimated survey completion time

  • Survey less than 1000 words (Edwards 2002)

  • Specific interviewer replies to initial refusals (Dijkstra & Smit 2002)

  • Repeated contact with potential participants (Lynn et al. 2002)

  • Pre-notification (Edwards 2002)



Survey acknowledgement


Survey acknowledgement (recognition by the potential respondent that s/he received a survey) is essential to achieving high response rates, since potential respondents often do not even realize they received a survey. Approximately 10% of those in the general public who receive a postal questionnaire do not remember receiving it (Dillman 2000a). Personalizing the delivery and clearly stating the purpose of the mailing may improve acknowledgement by the recipient (Maheux et al. 1989; Dillman 2000b; Edwards 2002). For internet questionnaires delivered by email, for example, the survey should: include a clear subject line, be sent from someone the respondent knows (if possible), address the respondent by name in the salutation, and include a brief invitation to participate.



Participant cooperation


Factors that may influence potential participants' decision to cooperate can be conceptualized as either controllable or uncontrollable by the researcher.



Controllable factors


Incentives


Studies in both the public sector and among physician respondents consistently demonstrate that monetary incentives included with the initial request significantly improve response rates (Berry & Kanouse 1987; Church 1993; Edwards 2002; Singer 2002; Keating et al. 2008). Money also yielded better response rates than nonmonetary gifts such as food (Church 1993).


The minimum amount of monetary incentive needed to improve survey response rates remains unclear for medical trainees.

 

 

 

It should be noted that some researchers voice concerns about the unintended consequences of incentives, such as respondents providing more optimistic answers or coming to expect incentives for future surveys (James & Bolstein 1990; Shettle & Mooney 1999; Singer et al. 1998). However, several general-population studies found no differences in survey answers.

A practical recommendation for medical education researchers, who often work on a limited budget, is to send a monetary incentive of between $2 and $5 with the initial survey request, without requiring completion of the survey to keep the money. Although it may seem counterintuitive not to require survey completion before giving the incentive, the aforementioned research provides evidence that providing the incentive up front yields significantly higher response rates.



Length of survey


Not mentioning the survey length produced far greater cooperation than mentioning even short survey times.


Several studies of questionnaires mailed to physicians and to the general population confirm that shorter is generally better (Edwards 2002; Jepson et al. 2005; McFarlane et al. 2007).

 

Moreover, length may be even more important in electronic surveys, since surveys taken on mobile devices take approximately three times longer than surveys taken on computers (Dijkstra & Smit 2002; Mavletova 2013).


Psychological cost


For example, rather than the professional repeating, "You don't want to participate?", a response of "That's a shame; your participation means a lot to me" is more successful in getting potential respondents to participate in the survey.



Personalization and authority


Utilizing a personal connection between the researcher and potential respondents also makes a statistically significant improvement in response rates (a 23% absolute increase in one study), compared to emphasizing the authority of the sponsoring body, such as a university.


Personalization in emails, addressed envelopes, and thank-you letters may convince potential respondents to participate (Maheux et al. 1989; Edwards 2002; Lynn et al. 2002). For example, a researcher may write a personalized email to each potential respondent, with their name in the salutation and a subject heading such as "An important study to improve your education."



Number of attempts


Generally, three attempts are recommended based on cost/benefit analyses and the general practices of surveyors (Dillman 2000a; Kellerman & Herold 2001; Willis et al. 2013). If there is no response after three attempts and the contact information is known to be valid, additional attempts will likely have a very low yield (Dillman 2000a; Hamilton 2003).

 

The optimal timing for reminders depends on the study. Approximately half of all responses tend to arrive within one day of initiating a request, and more than 90% tend to arrive within two weeks (Hamilton 2003; SurveyMonkey 2011; Experian 2012; Mailchimp 2014). Researchers should take potential participants' schedules, such as exams, into consideration when planning the study period and sending reminders. We recommend a reminder interval of between one day and two weeks, adjusted for other timing conflicts the potential participants may have during that period.

 

 

Survey timing


 

The optimal time to send email/web surveys is unclear; we have no specific recommendations for email timing for medical education researchers and offer it as an area ripe for research.


Advance letters and reminders


Advance letters announcing the survey, whether for a mailed questionnaire or an in-person interview, are uniformly helpful in the public survey literature (Heberlein & Baumgartner 1978; Dillman 2000a; Edwards 2002; Groves et al. 2002). The pre-announcement typically arrives a few days before the survey.


Reminder communication has also been shown to increase questionnaire response rates across the board, regardless of modality. Whether paper or web, ease of use is important, so the reminder should include another copy of the paper questionnaire or another link to the web questionnaire (Dillman 2000a; Johnson et al. 2002).



 

Uncontrollable factors


Gender


McFarlane and colleagues found females less likely to respond initially, but that discrepancy was resolved with repeated survey attempts. The finding was repeated in other populations as well (Groves & Couper 1996; Groves et al. 2000; McFarlane et al. 2007). Thus, although the gender of each potential respondent is not controllable, the option to make repeated survey attempts is controllable and should be utilized to improve female representation in surveys.


Salience


Topic salience (relevance to an individual) can impact both unit and item responses. It is a central part of Groves' leverage-saliency theory, and research supports that a topic's salience can influence response rates (Groves & Couper 1996; Groves et al. 2000, 2002; NRC 2013). The leverage-saliency theory proposes that the decision to participate in a survey depends on a complex relationship between each potential respondent (their likes, dislikes, time constraints, opinions about the survey topic, etc.) and the specific survey topic.


For example, the instructor may create a personal connection between the student and the survey by pointing out that the responses will be used to improve the second part of the required course. In this case, the student will be directly impacted by the survey and has an incentive to participate.


Interviewer


Research in the public sector has demonstrated a difference in response rates related to interviewers, especially interviewer experience (Groves & Couper 1996, 1998; Groves et al. 2002; NRC 2013). The reasons are complex, varying from interviewer confidence to the ability to tailor the conversation to the potential respondent (Groves & Couper 1998; Durrant et al. 2010). Interestingly, interviewer scripts do not alter response rates for in-person interviews.




Item nonresponse



At least one study demonstrated that refusal conversions (potential participants who initially declined participation but eventually participated after further contact) skipped, or gave a no-opinion (NO) answer to, more questions than participants who did not have to be converted from refusal status (Beatty & Herrmann 2002; Mason et al. 2002).



No-opinion (NO) responses


The term cognitive hindrance describes when a person cannot respond to a question because he/she simply does not know the answer. A motivational hindrance, in contrast, occurs when the potential respondent knows the answer but does not want to provide it.


A motivational hindrance may arise, for example, from a personal question that the respondent would rather not divulge to a stranger, such as a vice (DeMaio 1984; Beatty & Herrmann 2002; Krosnick 2002).


The NO option is intended to prevent forcing errors of commission that result when respondents without knowledge or opinion of a question provide essentially false information (Gilljam & Granberg 1993; Visser et al. 2000; Beatty & Herrmann 2002; Krosnick 2002). Thus, the NO option is intended as a caveat for cognitive hindrances, not motivational hindrances. Current recommendations from most survey researchers are not to provide a NO option for attitude and behavior questions. This is because research demonstrates that respondents' "leanings" toward opinions (even if they are not confident about those opinions) strongly predict real outcomes (Craig & McCann 1978; Bradburn & Sudman 1991; Gilljam & Granberg 1993; Visser et al. 2000; Krosnick 2002; Beatty & Herrmann 2002; Elliott et al. 2005). Decades of research repeatedly demonstrate that most of the time respondents check NO on attitude questions simply because they do not want to do the required cognitive work for the question, or do not want to reveal an embarrassing response (Craig & McCann 1978; Bradburn & Sudman 1991; Dillman et al. 1993; Beatty & Herrmann 2002; Krosnick 2002; Elliott et al. 2005).



Providing a NO option for a cognitive question may be necessary, depending on the question topic and the potential respondents, but the question should define the desired accuracy of judgment so that people feel comfortable selecting something other than NO even if they are not absolutely certain about the answer (Craig & McCann 1978; Bradburn & Sudman 1991; Dillman et al. 1993; Beatty & Herrmann 2002; Redline & Dillman 2002; Elliott et al. 2005). Without explicit instructions, one respondent may select NO if she has any doubt, whereas another may select NO only if she has great doubt about the answer.


Skipped questions


 

In general, surveys of any modality should be kept as short as possible to avoid survey fatigue and subsequent item nonresponse. In addition, cognitive and motivational hindrances can cause item nonresponse in any survey modality if a NO option is provided.



Postal and paper questionnaires


Respondent-friendly designs are important and improve item response on postal questionnaires. Design points include:

    • (1) providing single-step instructions on a standard size sheet of paper,

    • (2) avoiding extra papers such as inserts,

    • (3) listing name (if necessary) and demographic information only once (rather than on each page),

    • (4) using single-sided forms,

    • (5) allowing the use of any writing tool, and

    • (6) visually emphasizing the areas of the page that have the questions so respondents are not distracted by the optical scanner markings (Hox & de Leeuw 2002; Redline & Dillman 2002; Franko & Tirrell 2012).


It is well established that branching questions in paper questionnaires are associated with greater item nonresponse rates (Hox & de Leeuw 2002; Redline & Dillman 2002; Franko & Tirrell 2012; Buskirk & Andrus 2013). Although there are visual designs that reduce the nonresponse impact, we recommend against any branching questions on paper questionnaires.


 

Face-to-face and telephone surveys



Web/email surveys


The web questionnaire visual interface is particularly challenging with the increased use of smartphones (Franko & Tirrell 2012; Buskirk & Andrus 2013; Cullen 2013). A web survey designed for a standard desktop computer will appear jumbled on a small screen and can even have answer spaces associated with the wrong question. Thus, researchers must choose to:

    • (1) block mobile device users from accessing the desktop-formatted survey,

    • (2) create additional surveys specifically designed for various mobile devices,

    • (3) provide an internet version that adjusts to any device (responsive design), or

    • (4) force mobile users to adapt to the desktop interface (not recommended) (Franko & Tirrell 2012; Buskirk & Andrus 2013; Cullen 2013).


The overarching principle for reducing question skipping on web surveys is ensuring the questionnaire is clearly visible on the various interfaces.


 

Web surveys should be built with these platforms in mind so as to reach as many respondents as possible.


We recommend using a survey service that offers a web interface designed to work well with most platforms, and that the questionnaire writers test their surveys on all of those platforms. Use simple question layouts to avoid interface issues.


Another important consideration for web surveys is how many questions to put on each page. Adding pages increases the total survey time (Dillman 2000a; Couper et al. 2001; Sue & Ritter 2012; Mavletova 2013), which increases survey burden. However, a long page may leave the respondent lost mid-page, especially on a mobile device. The optimal page length and number of questions per page are not clear, but the literature consensus is that multiple questions per page improves item response rates (Dillman 2000a; Couper et al. 2001; Groves 2006; Sue & Ritter 2012).




Design recommendations for web questionnaires, each followed by its rationale:

• Use a brief introduction (Dillman 2000a). Respondents have already read information about the study in the invitation to participate.

• Provide an easy, interesting first question (Dillman 2000a). Motivates the respondent.

• Consider a Personal Identification Number (PIN) to enter the questionnaire (Dillman 2000a). Prevents people outside of the study from accessing it and makes longitudinal studies easier.

• Begin each question with a number and place its response as close as possible to the question (Dillman 2000a). Questionnaires aligned as blocks tend to be distorted when converted to different screen sizes.

• Avoid excessive colour; black on white is adequate (Dillman 2000a). Text overlaid on colour can be distorted depending on the user's device palette.

• Separate each question/answer pair (Dillman 2000a). Reduces wrap-around text and displacement of answer blocks that forces questions on top of one another.

• Consider a "floating window" accessible by touching or hovering the cursor over a question (Dillman 2000a). Reduces the propensity for wrap-around text and reduces total screen text, though it may not be viewed by everyone.

• Limit the number of drop-down box choices to a number visible at once on any screen (Couper et al. 2001). The options first visible may influence respondents to choose them over those that require scrolling to view.

• Minimize graphics and other features that increase page download time (Couper et al. 2001; Sue & Ritter 2012). Longer download times are associated with greater survey abandonment.

• Include a simple progress bar (Couper et al. 2001; Sue & Ritter 2012). May motivate respondents to continue, though the evidence is weak, and a progress bar can itself increase download time.

• Use closed-ended or short open-ended questions when possible, rather than long, open-ended questions (Couper et al. 2001; Sue & Ritter 2012). Longer entry times reduce item response rates.

• Do not require horizontal scrolling to view questions (Sue & Ritter 2012). A difficult interface for respondents, causing confusion over which questions and answers correspond.

• Consider "double banking" or "triple banking" (two or three columns of answers, respectively) for lists of answers longer than a single screen view (Dillman 2000a; Sue & Ritter 2012). Ensures that all possible answers have a similar probability of being seen.

• Use Arial, Times New Roman, or Verdana fonts at 14 point (Sue & Ritter 2012). The fastest and most accurately read fonts.




Part two: Nonresponse bias



A. Factors Associated with Nonresponse Bias

   Respondent does not have a relationship with the sponsoring organization.

   Government sponsored surveys.

   Interviewer-administered questionnaires (compared to self- administered).

   General population (compared to specific populations).

   Attitudinal type questions (compared to behavioral and demographic questions). 


B. Factors Not Associated with Nonresponse Bias

   Prenotification of the survey.

   Incentives to participate in the survey.

   Health topic (compared to other topics generally).

   Urban (compared to mixed locations of homes).

   Majority ethnicity (compared to minority ethnicity).

   Topic relevance to the potential respondent (topic salience).

 

 

 

 

 

Understanding nonresponse bias


We now know that the number of nonrespondents and the probability of nonresponse bias are very poorly related (r = 0.3) (Groves 2006; Halbesleben & Whitman 2013). Response rate and nonresponse bias are two different measurements, each of which provides different information to readers.


For a low response rate to produce nonresponse bias, there must be a relationship between the reason people did not respond and the questions being asked on the survey. For example, a study on the professionalism of being tardy is likely to suffer nonresponse bias from people who arrived late and never turned in a survey: the reason they did not turn in the survey (they were tardy) is related to the survey's topic of interest (tardiness). Therefore, a low response rate does not in itself confer any bias. If the reason for the low response rate has nothing to do with the survey topic, there should not be any bias. There must be a relationship between the reason for nonresponse and the survey, and the opinions of those who did not respond must differ significantly from those who did. This is a very different notion from previous conceptions of nonresponse bias.



 

Calculating nonresponse bias


It is impossible to calculate a true nonresponse bias, since it is a number that does not exist. We must use either proxy data or proxy nonrespondents.
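The logic behind the proxy approaches can be written as the simple deterministic approximation used in the survey-methods literature (e.g. Groves 2006): the bias of the respondent mean is roughly the nonresponse share times the difference between respondent and nonrespondent means. A minimal sketch, with the function name and all numbers invented for illustration:

```python
def estimated_nonresponse_bias(mean_respondents, mean_proxy_nonrespondents,
                               n_nonrespondents, n_total):
    # bias(respondent mean) ~ (nonresponse share) *
    #   (respondent mean - nonrespondent mean).
    # The true nonrespondent mean is unobservable, so it must come
    # from proxies such as late waves or follow-up respondents.
    nonresponse_share = n_nonrespondents / n_total
    return nonresponse_share * (mean_respondents - mean_proxy_nonrespondents)

# 75 respondents averaging 4.2 on a 5-point item, 25 proxy
# nonrespondents averaging 3.4:
bias = estimated_nonresponse_bias(4.2, 3.4, 25, 100)
print(round(bias, 2))  # 0.2
```

If the estimate is near zero, either nonrespondents resembled respondents or they were too few to matter.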


Ways to calculate nonresponse bias can be conceptualized as either:

(1) measuring the variable of interest for proxy nonrespondents, or

(2) measuring supportive data (a proxy for the variable of interest) for the real nonrespondents.


 


Methods evaluating the variable of interest




There are several methods for calculating nonresponse bias using proxy nonrespondents; here we explore the more common ones: wave analysis and follow-up analysis.



1. Wave analysis

In wave analysis, late respondents (those who require reminders to respond) are proxies for nonrespondents. The responses in the last wave of surveys returned (such as after the final email reminder) are compared to the first wave of responses (such as after the initial invitation to participate). This method uses late respondents as nonrespondent proxies and is a commonly used, simple, and well-accepted way to evaluate nonresponse bias. Researchers can point to their wave analysis to directly demonstrate whether or not their study suffered from nonresponse bias. See Jutel and Menkes' article for a published, real-data example (Doherty & Ellis-Chadwick 2003; Jutel & Menkes 2009).
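As a hedged illustration of the comparison a wave analysis makes, the sketch below runs a two-proportion z-test between first-wave and last-wave answers to a yes/no item. All counts are invented, and real analyses might instead use a chi-square test or compare means, depending on the variable of interest.

```python
import math

def two_proportion_z(yes_first, n_first, yes_last, n_last):
    # z-statistic for the difference between the share answering
    # "yes" in the first wave and in the last wave. A large |z|
    # (e.g. > 1.96) suggests the late respondents, our proxies for
    # nonrespondents, answered differently: possible bias.
    p_first = yes_first / n_first
    p_last = yes_last / n_last
    pooled = (yes_first + yes_last) / (n_first + n_last)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_first + 1 / n_last))
    return (p_first - p_last) / se

# 60 first-wave respondents (42 "yes") vs. 25 respondents who only
# answered after the final reminder (10 "yes"):
z = two_proportion_z(42, 60, 10, 25)
print(round(z, 2))
```

Here 70% of early respondents said yes against 40% of late ones, so |z| exceeds 1.96 and the waves differ at the 5% level.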


 

2. Follow-up analysis

Follow-up analysis is a common method in which researchers contact potential respondents in the sampling frame who are, up to that point, still nonrespondents and ask a greatly shortened survey, such as the one or two most important questions, without demographic or other supportive information. The follow-up respondents are considered proxies for the nonrespondents, and nonresponse bias is then calculated for the variable(s) asked of them. It is often difficult to obtain a large enough number of follow-up respondents among people who were already not responding to the survey, nor is there a defined minimum number or proportion of follow-up respondents needed to analyse the data. Additionally, this method does not provide information, such as demographics or secondary survey questions, to explain why the follow-up respondents differ from the original sample; in a wave analysis, researchers can look through the supporting information available for all respondents, but not in a follow-up analysis. A study by Doherty and Ellis-Chadwick is a good practical example of how to conduct a follow-up analysis (Doherty & Ellis-Chadwick 2003).



Methods evaluating supporting data


 

Population comparison method

 

The population comparison method uses other sources of information about nonrespondents to compare demographic information between respondents and nonrespondents, or between respondents and the entire population. For example, a researcher giving a survey to the entire first-year medical student class could compare the proportion of female respondents against the proportion of females in the entire first-year class, information that is typically readily available from the dean's office.


 

It is important to recognize that the population comparison method is NOT comparing response rates between demographic groups of responders (e.g. whether the response rate for males was similar to the response rate for females).
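The comparison described above amounts to a one-proportion test: the female share among respondents against the share known for the whole sampling frame. A minimal sketch with invented numbers; a chi-square goodness-of-fit test across several demographic categories is a common alternative.

```python
import math

def population_comparison_z(n_female_respondents, n_respondents,
                            population_female_share):
    # One-proportion z-test: compares the observed female share
    # among respondents with the share known for the entire class
    # (e.g. from the dean's office). |z| > 1.96 suggests the
    # respondents under- or over-represent women.
    observed = n_female_respondents / n_respondents
    se = math.sqrt(population_female_share
                   * (1 - population_female_share) / n_respondents)
    return (observed - population_female_share) / se

# 75 respondents, 30 of them female (40%), from a first-year class
# that is 50% female:
z = population_comparison_z(30, 75, 0.50)
print(round(z, 2))
```

Note that this tests representativeness on a supporting variable only; a matching demographic profile does not by itself rule out bias on the survey's actual variable of interest.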




 

 




 2016 Mar;38(3):217-28. doi: 10.3109/0142159X.2015.1105945. Epub 2015 Dec 9.

 

Improving response rates and evaluating nonresponse bias in surveys: AMEE Guide No. 102.

 


Abstract

Robust response rates are essential for effective survey-based strategies. Researchers can improve survey validity by addressing both response rates and nonresponse bias. In this AMEE Guide, we explain response rate calculations and discuss methods for improving response rates to surveys as a whole (unit nonresponse) and to questions within a survey (item nonresponse). Finally, we introduce the concept of nonresponse bias and provide simple methods to measure it.

PMID: 26648511

