Social Research Update
Issue 8 Winter 1994

Social Research Update is published quarterly by the Department of Sociology, University of Surrey, Guildford GU2 7XH, England. Subscriptions for the hardcopy version are free to researchers with addresses in the UK. Apply by email to

Telephone methods for social surveys

Roger Thomas and Susan Purdon

Roger Thomas is Director and Dr Susan Purdon is a Senior Researcher at the Survey Methods Centre at Social and Community Planning Research (SCPR). The Survey Methods Centre conducts and publishes original, mainly empirical, research to test and improve all aspects of survey methods, including statistical design, instrument design, data collection, coding and classification and data analysis. It also carries out survey projects using innovative techniques, undertakes consultancy, teaches survey methods and disseminates information on new developments and good practice through seminars and a newsletter. This Update is partly based on research conducted for the Health Education Authority to evaluate telephone interviewing as a means of collecting information about health and health-related behaviour.

The main attraction of telephone interviewing is that it enables data to be collected from geographically scattered samples more cheaply and quickly than by field interviewing, but avoids the well-known limitations of postal surveys. Other advantages are: that interviewing from a central telephone unit lends itself to careful supervision and control; and that it is possible to avoid cluster sampling, which incurs unfavourable statistical design effects but has to be used in field survey designs to control interviewer travel costs.

The problems, on the other hand, are to do with obtaining adequately representative samples of the general population and adequate response rates when persons or households are approached "cold" by telephone. Doubts have also been raised about the quality of the data, compared with face-to-face interviewing.

In this Update we consider to what extent - and in what circumstances - the potential advantages have been realised and to what extent technical problems and doubts about quality remain.


A critical problem raised by telephone-based surveys is that of obtaining representative probability samples. About ten per cent of the general population do not have a telephone in their home; and about a quarter of those who do have telephones have ex-directory (unlisted) numbers. The proportion of homes without telephones is slowly falling, but the proportion of ex-directory numbers is rising. A complete and accessible listing of domestic telephone numbers does not exist and is unlikely to do so in future.

In the face of these problems there is a division between the approaches adopted by quota and random samplers. Quota sampling assumes that a sample constructed by accepting persons who are immediately available for interview will be sufficiently unbiased, so long as it satisfies the quotas (and sometimes other constraints). But differences within quota cells between those who are reachable by telephone and ready and willing to respond (included) and those who are not (excluded) may nevertheless bias the results. The problem is exactly analogous to that which exists with field surveys based on quotas, where checks have shown that 60 to 70 per cent of the individuals approached may fail to respond.

Random sampling, by contrast, requires a process for selecting members from a determinate population that enables each case to be assigned a probability of selection. No substitution of easy for hard-to-interview cases is allowed. Random samplers therefore worry about exclusions from the sampling frame, about uncontrolled variation in selection probabilities and about nonresponse that rises higher than (say) 20 to 30 per cent.

Households with no telephone

There is, by definition, no direct way of covering persons who do not have a telephone in their home in a "telephone-only" survey. Persons and households without telephones are a deprived group which social researchers may be particularly keen to represent accurately. Such households tend to be small and to be headed by young adults, or by adults who are unemployed, on low incomes, single parents and so on. On the other hand they are now a smallish minority.

Work has been done to see whether "telephone-only" samples can be reweighted so as to compensate, at least in part, for the omission of non-telephone households (Purdon and Thomas 1994). The use of differential weights tends to increase the variance of estimates, but that may be worthwhile if it sharply reduces bias. Reweighting is routinely applied to the quota samples used in market research as a way of supplementing the fieldwork controls (quotas), but the effectiveness of the strategy depends on the method of weighting used. "Off the shelf" weighting systems are not necessarily optimal for counteracting the biases due to omission of non-telephone households.

Development of an appropriate weighting algorithm depends on use of data from face-to-face surveys which ask both about telephone ownership and about the output variables of interest, e.g. health-related behaviour. Particular ways of weighting the data can be shown to make a substantial contribution to removing the bias in overall population estimates. Thus for example, a telephone-only sample will provide estimates of the prevalence of smoking in the adult population which are downwardly biased because non-owners are much more likely than average to be smokers. Weighting can remove the greater part of this bias (though not necessarily other biases).
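The reweighting logic can be sketched in a few lines. The cells, proportions and smoking rates below are hypothetical illustrations, not figures from the research cited, and a real weighting scheme would use several variables jointly rather than age group alone:

```python
# Illustrative post-stratification: reweight a telephone-only sample so its
# cell proportions match benchmarks from a face-to-face survey. All figures
# are hypothetical.

# Population shares by age group, from a face-to-face survey that also
# covers non-telephone households:
population = {"16-29": 0.30, "30-59": 0.50, "60+": 0.20}

# Shares observed in a telephone-only sample (young adults
# under-represented because they are less likely to have a telephone):
sample = {"16-29": 0.22, "30-59": 0.54, "60+": 0.24}

# Cell weight = population share / sample share
weights = {cell: population[cell] / sample[cell] for cell in population}

# Hypothetical smoking prevalence within each cell of the telephone sample:
smoking = {"16-29": 0.34, "30-59": 0.28, "60+": 0.18}

unweighted = sum(sample[c] * smoking[c] for c in sample)
weighted = sum(sample[c] * weights[c] * smoking[c] for c in sample)

print(f"unweighted estimate: {unweighted:.3f}")  # biased downwards
print(f"weighted estimate:   {weighted:.3f}")
```

Because younger cells both smoke more and are under-represented, the weighted estimate comes out higher than the unweighted one, illustrating how weighting removes the greater part of the non-ownership bias in the overall prevalence figure.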

However, weighting cannot get around the fact that the proportions and actual numbers of younger, poorer people found in "telephone only" samples will be smaller than they should be; and that the younger, poorer people who do appear in samples may not be typical of younger, poorer people generally. It would therefore be hazardous to use a "telephone only" sample to study smoking among young or poor adults, or indeed to study young, poor adults generally.

Households with ex-directory numbers

Persons who are members of households with ex-directory numbers also differ from the population mean. They are likely, for example, to be younger than average, to live in cities and, in particular, to be young women living alone. Since the proportion of the population which is ex-directory is much larger than the proportion which does not have a telephone at all, the scope for bias is substantial and is in practice superimposed upon the "non-owners" bias. Therefore telephone surveys based upon samples drawn from public directories are unlikely to be satisfactory, even when weighted.

The ex-directory problem is not unique to Great Britain. In the USA, for example, much work has been done to develop "random digit dialling" (RDD) as a means of providing representative probability samples of all telephone owners, including those who are ex-directory (Lepkowski 1988). This method can give coverage of all domestic numbers with known probability of selection by using a two-stage system of selection. At the first stage blocks of numbers are randomly selected (a "block" might be the first 7 digits of a number). Within each selected block a number is dialled at random and if the number dialled is a private domestic number, the block as a whole is included in the main sample. Numbers within the retained blocks are dialled until a fixed number of domestic numbers have been identified. Interviews are then sought with their owners.
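The two-stage selection can be sketched in simulation. Everything here is a toy model under stated assumptions: blocks are reduced to the integers 0 to 999, the last three digits form the suffix, and is_domestic() stands in for actually dialling a number. The clustering of domestic numbers into a few "dense" blocks is what makes the second stage pay off:

```python
import random

# Toy simulation of two-stage RDD under simplified, hypothetical assumptions.
random.seed(1)

# Hypothetical universe: domestic numbers cluster in a few "dense" blocks;
# most blocks are unused or commercial, with a very low hit rate.
DENSE_BLOCKS = {b for b in range(1000) if random.random() < 0.05}

def is_domestic(block, _suffix):
    """Stand-in for dialling: 60% hit rate in dense blocks, 1% elsewhere."""
    rate = 0.6 if block in DENSE_BLOCKS else 0.01
    return random.random() < rate

def select_block():
    """Stage 1: dial one random number in a randomly chosen block;
    retain the block only if that number turns out to be domestic."""
    while True:
        block = random.randrange(1000)
        if is_domestic(block, random.randrange(1000)):
            return block

def domestic_numbers(block, k):
    """Stage 2: dial random suffixes within the retained block until
    k domestic numbers have been identified."""
    found = []
    while len(found) < k:
        suffix = random.randrange(1000)
        if is_domestic(block, suffix):
            found.append((block, suffix))
    return found

sample = []
for _ in range(5):                                       # retain 5 blocks...
    sample.extend(domestic_numbers(select_block(), 10))  # ...10 numbers each

print(len(sample))  # 50 domestic numbers in all
```

Because a block is only retained after one of its numbers answers as domestic, the second-stage dialling is concentrated where the hit rate is high, which is the economic point of the design.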

RDD starts from the population of all telephone numbers which have the standard numeric structure. However, in practice a high proportion of the possible range of numbers will not be in use, or will be commercial, not for voice traffic etc., so randomly dialling numbers within the whole of the possible numeric range produces very low hit-rates. What makes this crippling in practice is the fact that redialling numbers which appear to ring but do not reply uses up a large amount of time and effort, but is necessary in order to establish whether they are out of scope or belong to households which seldom reply.

Therefore, successful application of RDD requires that the sampler have a detailed knowledge of how the telephone numbering system is structured (e.g. which blocks of numbers are not in use, or reserved for special purposes). Such information enables the hit rate for RDD (that is, the proportion of selected numbers that will in fact yield private households) to be raised to economic levels.

Historically Britain has not had a uniform system of telephone numbering and this situation is likely to remain for some time. Furthermore, there is an increasing tendency for people to wish to control access to themselves by telephone. British Telecom, as provider of the telephone service, is wary of being seen to frustrate this wish and is not forthcoming with information which would assist RDD sampling by identifying in-use ranges of numbers.

There is at least one British example of a successful RDD survey of health-related behaviour in the general population and this shows that in certain circumstances at least some of the problems can be overcome (McQueen et al 1989). However, the application in question is confined to areas of the country where the telephone system is relatively tractable. Extending the method to the whole country, so as to make national telephone surveys possible as routine and with predictable costs, would involve considerable further investment and would still run some risk of an unsuccessful outcome.

In market research telephone surveys the "directory plus 1" system is often used as a means of getting at households which are on the telephone, but have ex-directory numbers. A sample is first drawn from the public residential telephone directory and then 1 is added to each number drawn to provide the list of numbers actually dialled. This has the effect of drawing into the sample some households which have unlisted numbers. However, in areas where unlisted numbers are prevalent not enough numbers will be found in the first place, so the residents of such areas will still be under-represented in the "directory plus 1" sample. In theory this might be corrected if we knew what proportion of domestic numbers in each exchange area were ex-directory, but that is the kind of information that BT is unwilling to provide.
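A minimal sketch of "directory plus 1", using a hypothetical exchange in which numbers run from 1000 to 1999 and about a quarter are ex-directory:

```python
import random

# "Directory plus 1" sketch on a hypothetical exchange whose numbers run
# from 1000 to 1999, with roughly a quarter of them ex-directory.
random.seed(2)
unlisted = {n for n in range(1000, 2000) if random.random() < 0.25}
listed = [n for n in range(1000, 2000) if n not in unlisted]

# Draw a sample from the public directory (listed numbers only)...
drawn = random.sample(listed, 100)
# ...then add 1 to each drawn number to give the numbers actually dialled.
dialled = [n + 1 for n in drawn]

# Some of the dialled numbers belong to ex-directory households, which a
# pure directory sample could never reach.
hits_unlisted = sum(1 for n in dialled if n in unlisted)
print(f"{hits_unlisted} of {len(dialled)} dialled numbers are ex-directory")
```

Note that the sketch reproduces the limitation described above: the number of dials made in each area is driven by how many listed numbers the directory contains there, so areas where unlisted numbers are prevalent remain under-represented.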

Obtaining response

As with face-to-face interviewing, the reasons for nonresponse to telephone surveys can be divided into non-contacts and refusals. Some centralised telephone interviewing installations in this country now use automated call-scheduling systems. With these systems, making multiple calls to catch the seldom-at-home is easier and cheaper. However, such systems are still bedevilled by the presence of out-of-scope numbers which appear to ring but never reply. Apart from the waste of time and effort, there are problems in deciding whether to classify these cases as nonresponse or as out of scope.

After contact has been established with someone at a number there are some additional sampling problems to be faced which increase the risk of telephone nonresponse.

In the first place, the interviewer has to determine whether the number is residential and deal consistently with, for example, businesses run from the home, portable phones and communal phones. Secondly, it is necessary to identify households possessing more than one telephone number, so as to establish their probability of selection. Thirdly, it is necessary to establish a unique association between each telephone number and the households or individuals at that number. This is analogous to identifying residents at an address in field interview surveys, but is harder to achieve where the interviewer cannot use observational cues and must rely on obtaining the required information orally.

When the interviewer has made contact with someone at the number dialled, certain field sampling procedures often need to be applied which can be more difficult to explain over the telephone than face-to-face and which incur risk of nonresponse. Often, the interviewer will need to enumerate all household members so that one can be selected at random. Even with field interviewing this is difficult, since detailed information has to be extracted before secure rapport can be established with a specific respondent and the risk of refusal to cooperate further is quite high. These problems are compounded when the procedure has to be administered by telephone.
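The selection and weighting arithmetic implied by the last two paragraphs can be sketched as follows. The household composition and the two-line assumption are hypothetical:

```python
import random

# Sketch of within-household selection and the design weight it implies.
# The household members and the two-line assumption are hypothetical.

def select_respondent(members, rng):
    """Enumerate household members, then pick one at random, so each
    member has a known 1/len(members) chance of selection."""
    return rng.choice(members)

def design_weight(n_members, n_lines):
    """A household reachable on n_lines telephone numbers is n_lines times
    as likely to be dialled, and the chosen member had a 1/n_members chance
    within it, so the inverse-probability weight (up to a constant) is
    n_members / n_lines."""
    return n_members / n_lines

household = ["adult A", "adult B", "adult C"]  # enumerated by the interviewer
respondent = select_respondent(household, random.Random(3))
weight = design_weight(n_members=len(household), n_lines=2)
print(respondent, weight)  # weight is 3/2 = 1.5
```

This is why both the number of lines serving the household and the full list of its members have to be established over the telephone before the interview proper can begin: without them the selection probability, and hence the weight, is unknown.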

As regards refusals, there is some truth in the intuitive impression that it is easier to put the phone down than it is to refuse a request from an interviewer calling in person. It is also true that broken-off interviews are commoner on the telephone than face-to-face, probably for similar reasons. These findings have made telephone survey designers cautious about interview length and there tends to be a rule of thumb that it is unwise to attempt interviews lasting longer than 20 minutes or so by telephone. However, much of the evidence comes from market research interviews which many respondents find rather boring and uninvolving. As in the case of face-to-face interviewing, a great deal depends upon the level of interest and involvement aroused by the subject matter.

To the extent that increased non-response tends to be associated with increased bias in survey estimates, lower response in telephone surveys is not a trivial problem. However, it need not be a crippling one. There is some evidence that with experience and effort rates of response will be perhaps 10 percentage points below those that would be expected if the survey were conducted face-to-face.

Quality of information obtained

Another important question to be asked about telephone surveys is whether they are a reliable way of collecting information from individuals. For example, will individuals answer sensitive questions related to their health and health-related behaviour truthfully over the phone? Some research undertaken in the UK suggests that telephone surveys are at least as successful as face-to-face interviews in eliciting such information (Sykes and Collins 1988, McQueen 1989).

Other research suggests that some questions are answered, on average, slightly differently over the telephone. In particular, answers to open questions tend to be shorter and the whole interview procedure tends to proceed more briskly than in the case of face-to-face interviews. For non-sensitive factual questions few differences have been reported in the distributions of responses obtained (though comparison is made more difficult where there is a difference in rate of response). Such differences as do occur may be due more to the fact that visual aids such as prompt cards cannot be used over the telephone, than to any difference in the way respondents react to being questioned by telephone.

In some applications this inability to use multiple channels of communication, be it visual aids or body language, to build up rapport, can be a serious disadvantage.

There has been debate over whether questions of a sensitive or potentially embarrassing nature (for example, about intimate, dubiously legal or socially stigmatised forms of behaviour) are better or worse answered over the telephone. This is an inherently difficult topic to study and depends heavily on assumptions that higher rates of reporting certain behaviours (e.g. alcohol consumption, drug taking) indicate higher validity. It is likely that factors such as perceived confidentiality and the relative impersonality of telephone interaction are involved here. At this stage we cannot confidently assert that telephone methods systematically improve or damage data quality compared with face-to-face methods, although there may be an interaction with whether the prospective respondent agrees to be interviewed in the first place.

In the case of questions involving response scales (for example, where people express agreement with a statement on a scale ranging from strongly agree to strongly disagree) respondents on the telephone have been found to be slightly more likely to choose one of the extreme categories. These differences can affect comparisons between results from face-to-face surveys and telephone surveys, but are not usually so pronounced as to lead to significantly different interpretations of the data.


Taking the survey scene as a whole, telephone interviewing has become commonplace as a data collection method in Great Britain. For surveys of businesses and other organisations it is now standard and indeed may be the preferred mode of data collection, often in combination with postal questionnaire methods. Market research companies also commonly use the telephone to identify and interview quota samples of consumers.

However, there are obstacles to further progress, the most important probably being that no fully satisfactory probability sampling techniques are available.

As regards data quality, initial doubts about the reliability of factual information obtained over the telephone and its comparability with information obtained face-to-face have largely been discounted. Those responsible for the Employment Department's Labour Force Survey (LFS), for example, satisfied themselves that no identifiable "blips" in the data resulted from adoption of telephone methods and the LFS is now conducted by interviewing some households face-to-face and others by telephone. There is evidence of some mode effects on telephone and face-to-face measures of attitudes, but these are not very large and there is no general reason to think that the measures obtained by telephone are less valid (it has been claimed that in some situations they are more valid).

Achieving cost reduction and speed without sacrificing other criteria of survey quality depends on careful selection of applications. The LFS, for example, uses the telephone only in the case of households which have previously been interviewed face-to-face and have agreed to supply their telephone number. Telephone interviewing has not yet become a substitute for face-to-face interviewing across the board.

Further Reading

Telephone Surveys: the Current State of the Art (1991) Joint Centre for Survey Methods Newsletter Vol.11 No.3.
Papers by: Foreman J. 'Random Digit Dialling'; Thomas R. 'Characteristics of Households with and without Telephones'; Pile G. 'Setting up and managing a large scale telephone interviewing facility'; Smith E. 'Telephone Surveys: the Business Research Angle'.

Groves, R.M., Biemer, P.P. et al. (eds.) (1988) Telephone Survey Methodology. John Wiley and Sons.
Contains 32 substantial papers by authors from the USA, Great Britain and elsewhere on all aspects of telephone methodology.


Lepkowski, J.M. (1988) 'Telephone Sampling Methods in the United States' in Groves, R.M. et al. (eds.) Telephone Survey Methodology. John Wiley and Sons.

McQueen, D.V., Gorst, T. et al. (1989) A Study of Lifestyle and Health: A Computer Assisted Telephone Interview (CATI) Survey. An Interim Report. Research Unit in Health and Behavioural Change, University of Edinburgh.

McQueen, D.V. (1989) 'Comparison of Results of Personal Interview and Telephone Surveys of Behaviour Related to the Risk of AIDS: Advantages of Telephone Techniques' in Proceedings of the Health Survey Research Methods Conference, National Center for Health Services Research and Health Care Technology Assessment (DHHS publication no. (PHS) 89-3447).

Purdon, S. and Thomas, R. (1994) Feasibility of a Telephone Survey of Health Related Behaviour and Attitudes. SCPR. Unpublished (draft) report to the Health Education Authority.

Sykes, W. and Collins, M. (1988) 'Effects of mode of interview: experiments in the UK' in Groves, R.M. et al. (eds.) Telephone Survey Methodology. John Wiley and Sons.

Social Research Update is published by:

Department of Sociology
University of Surrey
Guildford GU2 7XH
United Kingdom.

Telephone: +44 (0)1483 300800
Fax: +44 (0)1483 689551

Edited by Nigel Gilbert.

Winter 1994 © University of Surrey

Permission is granted to reproduce this issue of Social Research Update provided that no charge is made other than for the cost of reproduction and this panel acknowledging copyright is included with all copies.