Introduction: A Roadmap to Effective Employee Surveys
This is a book about improving the practice of designing, conducting, analyzing, and taking action from employee surveys.
Today it is easier than ever to conduct employee surveys, and they are widely accepted as a way of gathering organizational intelligence. If anything, the pendulum may have swung too far: survey fatigue is often cited as an argument against fielding yet another one. Surveys play a central role anytime large numbers of people are included in a sensing initiative. If the organization is changing, an employee survey can provide critical insights into change effectiveness. Surveys can be an effective tool for understanding the drivers of employee motivation and engagement. They can measure key organizational processes from the perspective of the employees most informed about them—those who implement the processes daily.
Despite the prevalence of employee surveys, a number of common survey practices are less than optimal. There are three general areas where survey practices can be improved: (a) strategy, goals, and objectives; (b) design and delivery; and (c) analysis, interpretation, and action taking from the results. This book addresses each of these areas and offers advice for improvement. Guidance is provided on whom to include in the survey, which issues to focus on, and how to balance the tradeoffs involved.
The intended audience for the book includes both people who are responsible for designing and implementing employee surveys and those who use them, including HR leaders and practitioners, organizational development (OD) practitioners, and organizational leaders who oversee or use survey results. To keep the content accessible to as broad an audience as possible, a balance was struck between comprehensiveness and length and between more and less technical topics. This means that sometimes a topic is discussed in brief and, where appropriate, sources for additional information are provided.
Part one addresses common practices around employee survey goals, objectives, and methods that lead to suboptimal administrations. Surveys often are too long and cover too many topics. The target survey population often spans dissimilar business units, functions, roles, geographies, and groups of employees. Surveys often are promoted as measuring “critical” employee attitudes like engagement without a clear business case for how those attitudes impact organizational effectiveness and performance. The answers to these challenges are covered in chapters one through three.
Chapter one addresses survey purpose. Recognize the limitations of surveys. Don’t overuse them, and don’t rely on surveys alone when other types of assessment might be preferable or complementary: interviews, focus groups, archival data analysis, direct observation, and so on. Start with defined outcomes that provide maximum support to top organizational priorities. Choose one or two top priorities and focus on them. Be clear about the organizational level best suited for addressing the survey priorities. Addressing multiple levels in the same survey is doable but harder than sticking to one level as the primary focus.
Chapter two focuses on determining the right degree of emphasis on employee engagement. Contrary to the common perception of employee engagement’s importance, monitoring and acting to improve employee attitudes is not, for most roles, an advisable way to improve business performance. The benefits of improved employee attitudes accrue first and foremost to the employees. Whether the business subsequently benefits depends on the role and context. In certain customer-facing roles, there can be a causal link between employee engagement and business performance. In all other roles the link is tenuous at best and more likely runs in reverse: employee attitudes improve when business performance is high. Measures of employee engagement are best used as lagging or coincident indicators of business performance, not leading indicators.
How to match the appropriate measurements to the processes, roles, and teams is covered in chapter three. Choose survey questions most appropriate for the primary roles and processes that are the survey focus. The issues that matter most usually are not the same for people in different roles, functions, and geographies; when there are large dissimilarities, it is difficult to effectively address the highest priorities for everyone in a single survey. Even though you can include both individually focused and group-focused measurements, it is very hard to measure both individual- and group-level issues well in the same survey.
Part two addresses survey design and delivery. Despite the proliferation of consulting companies and online software offering tools and guidance, a number of common practices are anything but best in class. Survey questions often are designed without deep knowledge of good survey practices, leading to inaccurate measurements. Opportunities to improve response rates and measurement accuracy are missed. The benefits of matching survey data to organizational data are often unrealized. The answers to these challenges are covered in chapters four and five.
Chapter four reviews good survey design practices. Choose survey questions that are clear and to the point and have response codes that maximize ease and accuracy of the responses. Don’t reinvent the wheel; there are many sources for survey questions already written, especially validated questions from the research literature. Minimize tinkering with survey question wording by organizational stakeholders; it is more productive to focus their energies on using the data to support organizational processes and drive change. Use multiple questions to increase the accuracy of measurement while minimizing overall survey length to encourage high response rates.
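To make the multiple-questions point concrete, here is a minimal sketch in Python (simulated responses, hypothetical item names such as commit_q1); estimating internal-consistency reliability in this way is only possible when a concept is measured with more than one question.

```python
# A minimal sketch (simulated responses, hypothetical item names) of why
# multiple questions per concept help: with more than one item you can
# estimate internal-consistency reliability (Cronbach's alpha).
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
true_commitment = rng.normal(0, 1, size=500)            # latent attitude
items = pd.DataFrame({
    f"commit_q{i}": true_commitment + rng.normal(0, 0.8, size=500)
    for i in range(1, 4)
})

def cronbach_alpha(df: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of items intended to measure one concept."""
    k = df.shape[1]
    item_vars = df.var(axis=0, ddof=1)
    total_var = df.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

print(f"Cronbach's alpha for the 3-item scale: {cronbach_alpha(items):.2f}")
```

Values around 0.8 or higher are commonly treated as adequate reliability for scales of this kind, which is why a few well-chosen questions per concept usually beat a single question, provided overall survey length stays manageable.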
Chapter five addresses the tradeoff between anonymity and insights. Matching survey responses with other data is needed to show a link to business performance. For employees like salespeople with clear performance metrics, the matching is best when it can happen at the individual employee level. Keeping the identity of survey respondents anonymous is the best way to ensure that they will feel comfortable answering all questions honestly. With anonymous survey responses, however, matching with other data can take place only at the group level. Ensuring anonymity or confidentiality is needed to encourage survey respondents to be honest about sensitive issues. Do not ask for extremely detailed demographic information that could be used to reverse engineer privacy controls and reveal people’s identities in a supposedly anonymous survey. There is a tradeoff between maximum data matching and complete anonymity: choose the right balance for the survey strategy.
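As an illustration of the group-level alternative, the sketch below (hypothetical team names, made-up numbers) aggregates anonymous responses before joining them to a business metric, so individual answers stay unidentifiable.

```python
# A minimal sketch (hypothetical teams, made-up numbers) of group-level matching:
# anonymous responses are aggregated first, then joined to a business metric.
import pandas as pd

responses = pd.DataFrame({   # anonymous survey responses, only a coarse group label
    "team": ["East", "East", "West", "West", "North", "North", "South", "South"],
    "engagement": [4, 5, 3, 2, 4, 4, 2, 3],
})
performance = pd.DataFrame({  # business metric from a separate system, by team
    "team": ["East", "West", "North", "South"],
    "sales_per_rep": [118.0, 92.5, 110.0, 88.0],
})

team_scores = responses.groupby("team", as_index=False)["engagement"].mean()
matched = team_scores.merge(performance, on="team")
print(matched)
print("Team-level correlation:",
      round(matched["engagement"].corr(matched["sales_per_rep"]), 2))
```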
Part three addresses analysis, interpretation, and action taking. The desire to make the survey results easy to understand often leads to overusing simplified indexes that combine too many different issues. Conclusions are reached using analysis that ignores the power of statistical modeling. Action-taking decisions too often are based on external benchmarking and not often enough on internal benchmarking. Surveys are designed and implemented with insufficient upfront stakeholder engagement to ensure appropriate action taking. The answers to these challenges are covered in chapters six through nine.
The tradeoff between simple messages and actionable insights is addressed in chapter six. Simple composite indexes are good at capturing general employee moods, but combining multiple measures into a single index usually yields insights no different from those of a single question on job satisfaction. For deeper actionable insights that can guide leadership decision making, focus on the components of the index, not the aggregated index score. Employee engagement is best measured by focusing on the specific employee attitude(s) you care most about: turnover intention, job satisfaction, thriving, commitment, and so on.
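A small sketch (made-up scores, hypothetical component names) shows the difference in practice: the overall index only signals that one unit is somewhat lower, while the component scores show which issue to act on.

```python
# A minimal sketch (made-up scores, hypothetical components) of how an
# aggregated index can mask the component that actually needs attention.
import pandas as pd

components = pd.DataFrame(
    {
        "job_satisfaction": [4.1, 4.0],
        "commitment":       [4.2, 4.1],
        "career_growth":    [4.0, 2.6],   # the real problem in Unit B
    },
    index=["Unit A", "Unit B"],
)
components["engagement_index"] = components.mean(axis=1)
print(components.round(2))
# The index (about 4.1 vs 3.6) says only that Unit B is a bit lower; the
# component scores show that career growth, specifically, is the issue.
```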
Chapter seven covers statistical modeling. Average responses to individual questions and correlations between questions are the most common ways of engaging with survey data, yet on their own they are rarely actionable. Statistical models of employee attitudes yield the deepest insights into the factors that matter for employee engagement, retention, and so on. The results of complex statistical modeling must be presented in a way that all stakeholders can interpret. Survey vendors’ and internal experts’ statistical skills are typically underutilized and should be better leveraged for testing statistical models.
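As a sketch of the kind of model that can yield such insights (simulated data, hypothetical driver names, using Python's statsmodels rather than any particular vendor's tooling), the example below fits a logistic regression of intention to leave on several survey-measured factors instead of reporting item averages alone.

```python
# A minimal sketch (simulated data, hypothetical drivers) of a driver model:
# a logistic regression estimating which survey factors relate to intention
# to leave, rather than reporting item averages or pairwise correlations alone.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 1000
drivers = pd.DataFrame({
    "manager_support": rng.normal(3.5, 0.8, n),
    "career_growth":   rng.normal(3.0, 0.9, n),
    "workload":        rng.normal(3.2, 0.7, n),
})
# Simulated outcome: lower career growth and heavier workload raise quit intention.
latent = -0.8 * drivers["career_growth"] + 0.6 * drivers["workload"] + rng.normal(0, 1, n)
intends_to_leave = (latent > latent.mean()).astype(int)

X = sm.add_constant(drivers)
model = sm.Logit(intends_to_leave, X).fit(disp=False)
print(model.params.round(2))   # which drivers matter, and in which direction
```

The coefficients, not the raw averages, are what point leaders toward the factors most worth acting on, which is the sense in which modeling is more actionable than descriptive reporting.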
The right way to do benchmarking and interpretation of survey results is tackled in chapter eight. Benchmarking employee survey data against other organizations’ data is widely practiced but not very informative and virtually never actionable. More actionable insights come from internal benchmarking, when it is an “apples-to-apples” comparison of similar roles and work settings and when it tracks the same group over time. Before you can conclude that two benchmarking numbers are different, you have to consider both statistical significance and practical significance. If the data do not support a difference that is both statistically and practically significant, then the difference may be due to random factors and is almost never actionable without other corroborating data or information.
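The two kinds of significance can be checked together with a short sketch (simulated scores, hypothetical year-over-year comparison): a conventional test alongside an effect size such as Cohen's d.

```python
# A minimal sketch (simulated 1-5 responses) of checking both statistical and
# practical significance before treating two internal benchmarks as different.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
this_year = rng.normal(3.82, 0.90, 4000)
last_year = rng.normal(3.75, 0.90, 4000)

result = stats.ttest_ind(this_year, last_year, equal_var=False)
pooled_sd = np.sqrt((this_year.var(ddof=1) + last_year.var(ddof=1)) / 2)
cohens_d = (this_year.mean() - last_year.mean()) / pooled_sd

print(f"p-value   = {result.pvalue:.4f}  (statistical significance)")
print(f"Cohen's d = {cohens_d:.2f}       (practical significance)")
# With thousands of respondents, a shift of less than a tenth of a point can be
# statistically significant yet far too small to act on by itself.
```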
Chapter nine covers reporting and taking action. Closely tie survey reporting back to the purpose and desired outcomes for the survey. This will minimize extraneous analysis. Engage the organization under study as broadly as possible in the feedback process. Tailor reporting as needed by role, function, business unit, and so on. Involve key stakeholders early and often in the data collection and analysis process to ensure the greatest likelihood of effective action taking.
At various points throughout the book, references are made to specific survey constructs—sets of questions that together measure a single concept. Examples of specific survey items that can be used for many of these constructs are available in the Resources section at the end of the book.
The book is laid out in the order in which surveys are usually designed, conducted, and analyzed, with survey strategy and design coming first. Each chapter stands alone and can be read separately. However, if you would like to get the full benefit of the content, it is advisable to read all chapters before embarking on your survey effort. Though the later chapters address analysis, interpretation, and action taking, some of the points covered there have implications for survey strategy and design—especially if your goal is to maximize the usefulness and impact of your employee survey.