- Providing good leadership and working to motivate people.
- Training your employees.
- Developing an intelligent interviewing and hiring process.
Types of Validity

Content validity is evaluated by showing how well the content of a test samples the class of situations or subject matter about which conclusions are to be drawn, or how representative the test sample is of the universe of generalization for which it is intended. Example of content validity: the GRE Advanced Test in Psychology should adequately and proportionately represent the different fields of psychology. Example of a lack of content validity: a teacher gives an exam over chapters s/he has not covered in class. Unlike most of the other forms of validity, content validity cannot be measured by a statistic; it is usually assessed in terms of expert opinion.

Criterion-related validity is evaluated by comparing the test scores with one or more external variables (called criteria) considered to provide a direct measure of the characteristic or behavior in question. Example: self-esteem scores correlating with GPA.

Predictive validity indicates the extent to which an individual's future level on a criterion is predicted from prior test performance; or, equivalently, the extent to which future levels on a construct are predicted from present construct scores. Example: using the GRE-Verbal scores of college seniors to predict their future graduate school GPA.

Concurrent validity indicates the extent to which the test scores estimate an individual's present standing on the criterion; or, the extent to which a construct is related to another construct or criterion when both are measured at the same time. Example: a need-for-achievement score correlating with GPA, both measured now.

Construct validity is evaluated by investigating what qualities a test measures, that is, by determining the degree to which certain explanatory concepts or constructs account for performance on the test. This is the "big cheese" of validity and can be seen as incorporating all other forms of validity evidence.
In principle, there is a complete theory surrounding a construct, every link of which is empirically verified in construct validation. Construct validation requires the integration of many studies.

Convergent validity is evaluated by the degree to which different (ideally independent) methods of measuring a construct are related and produce similar results. A good metaphor here is a legal trial, where the different forms of evidence (e.g., eyewitness testimony, blood samples, fingerprints, fibers) converge on the same result and lead to a common conclusion. Example: self-reported extroversion is related to extroversion as reported by a spouse or as rated by an observer.

Discriminant validity is evaluated by the degree to which a construct is discriminable (e.g., uncorrelated) from, and non-redundant with, other related constructs. Example: your new measure of self-esteem can be differentiated statistically from other established measures of self-esteem (for example, by showing moderate to low correlations with cognate constructs, different validity patterns, and incremental validity).

Incremental validity refers to the degree to which a construct (or variable) adds significant unique variance to the prediction of some other construct or criterion. Example: in a hierarchical regression equation, your new measure of self-esteem adds unique variance to the prediction of teacher ratings of competence after the Coopersmith Self-Esteem scale has already been entered into the equation.

Known-group validation refers to predicting and verifying differences on a construct as a function of group membership, where there is a high degree of a priori consensus about between-group differences on levels of the construct. For example, we would predict and expect to find mean differences on the construct "Attitudes toward Abortion" between "Pro-Choice" and "Pro-Life" groups.
In fact, if we did not find a whopping t-test difference, we might suspect that something was wrong with our measurement of attitudes toward abortion. Or we might establish known-group validation for a measure of schizophrenia by comparing residents of a psychiatric hospital with the general population.
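The "whopping t-test difference" above can be sketched numerically. Below is a minimal pure-Python pooled-variance independent-samples t test; all scores and group sizes are fabricated for illustration only:

```python
# Known-groups validation sketch: compare attitude scores (fabricated data)
# for two groups with a pooled-variance independent-samples t test.
from statistics import mean, variance
from math import sqrt

def pooled_t(a, b):
    """t statistic for an independent-samples t test with pooled variance."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / sqrt(sp2 * (1 / na + 1 / nb))

# Hypothetical "Attitudes toward Abortion" scores (higher = more pro-choice).
pro_choice = [8, 9, 7, 9, 8, 10, 7, 9]
pro_life   = [2, 3, 1, 2, 4, 2, 3, 1]

t = pooled_t(pro_choice, pro_life)
print(f"t({len(pro_choice) + len(pro_life) - 2}) = {t:.2f}")
```

With well-separated group means like these, the t statistic is very large; a small t would be the warning sign described above.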
Reliability of measurement refers to consistency of measurement. Other synonyms for reliability are repeatability, reproducibility, precision, dependability, fidelity, accuracy, and generalizability. Suppose you wanted to test the reliability (or, if you will, the consistency or reproducibility) of your car odometer. You drive from your house to the post office and measure the distance on your car’s odometer to be 5.1 miles. Then you drive back home again and the distance measured on your car’s odometer is 5.4 miles. Is your odometer a reliable measure? Well, not if you want accuracy to be within tenths of a mile. But it is reliable if you want accuracy to be measured in terms of whole miles. This leads to an important reliability principle: Reliability is relative. How reliable we need our measure to be depends on what we plan to use the measure for. If you are just going on a casual date, you do not need a measure of interpersonal compatibility to be very reliable. If we are hiring the president of a major company, we would want a very reliable measure of, say, leadership potential.
We are usually interested in the reliability of a set of scores. For example, if we give one version of the ACT to a group of 100 students and give another version of the ACT (specifically, two parallel forms: forms that measure the same thing to the same degree) to the same group of 100 students, we would like to see the same scores for all people on both forms. We will probably never see exactly the same scores for all 100 people on both forms, but we would like to see a similar rank-ordering of the 100 students on both forms.
There are many different types of reliability:
- Test-retest reliability
- Inter-rater reliability
- Internal consistency and coefficient alpha
For example, if we want to generalize how reliable a measure is over time, we might want to assess test-retest reliability. By way of illustration, if we give a measure of need for achievement to 200 graduate students, then give the same measure 1 week later to the same 200 graduate students and correlate the two sets of scores, we would be measuring test-retest reliability. We assume that any difference in the rank-ordering of scores is because of unreliability. It is important to point out that there is no one single test-retest reliability for any measure. To illustrate, we might estimate test-retest reliability over a period of 1 week, 1 day, 6 months, or 10 years. There would surely be different test-retest reliability coefficients for each of these time intervals.
I have been watching the Olympics while I prepare these notes. More specifically, I have been watching the diving competition. At the end of each dive, the different judges give scores. Sometimes the judges are not consistent in their scores. Here we are dealing with inter-rater reliability. If we assessed the correspondence of scores between judges for a group of divers, we would be assessing inter-rater reliability. We might be similarly interested in inter-rater reliability when we look at the consistency of judges at the apple pie contest at the County Fair or the rulings of Supreme Court Justices or the health ratings given to restaurants by State health and safety inspectors. We want reliability to be high in all these cases, because we want to have confidence in the scores produced. I know I would not want to eat in restaurants where the health rating was not reliable. Or ride in an airplane where the safety inspectors’ ratings were not reliable. It is important to point out that, as in the case of test-retest reliability, there is no one single inter-rater reliability for any measure. Different inter-rater reliability coefficients would emerge depending on which raters (or judges or observers, etc.) we chose to study and how many raters we chose to study.
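For categorical ratings like the restaurant health grades mentioned above, one common inter-rater statistic is Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A minimal sketch with two hypothetical inspectors grading the same ten restaurants (all data invented):

```python
# Inter-rater sketch: Cohen's kappa for two raters assigning categorical
# grades (fabricated health ratings) to the same set of restaurants.
from collections import Counter

def cohens_kappa(r1, r2):
    """Chance-corrected agreement between two raters over the same items."""
    n = len(r1)
    observed = sum(a == b for a, b in zip(r1, r2)) / n
    c1, c2 = Counter(r1), Counter(r2)
    expected = sum(c1[k] * c2[k] for k in c1) / (n * n)
    return (observed - expected) / (1 - expected)

rater1 = ["A", "A", "B", "C", "B", "A", "C", "B", "A", "B"]
rater2 = ["A", "B", "B", "C", "B", "A", "C", "B", "A", "A"]

kappa = cohens_kappa(rater1, rater2)
print(f"Cohen's kappa = {kappa:.3f}")
```

For continuous scores like diving judges' marks, an intraclass correlation would be the usual choice instead; either way, which raters (and how many) you study changes the coefficient you get.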
When we develop a measure of extroversion or aggression or intelligence or any other construct, we like to know that the items are all measuring the same thing. We want the items in a measure to be relatively homogeneous (just like you want your milk to be homogeneous and not have crud in it) and for the measure to demonstrate internal consistency reliability, which is achieved by having items that measure the same construct and are correlated with each other. Imagine that you have four items measuring attitudes toward iguanas. Item one is "I like iguanas a lot." Item two is "I would be willing to have an iguana as a pet." Item three is "I would like to spend a lot of time with iguanas." Item four is "I like Mozart." If we used this scale to measure attitudes toward iguanas, I can tell you right now that the internal consistency reliability of the scale would be higher if we just used the first three items, because the fourth item is measuring something different. It is measuring attitudes toward Mozart or maybe attitudes toward classical music. As in the case of the other kinds of reliability, there is no one single internal consistency reliability for any measure. Different estimates would arise as different items and different numbers of items are studied.
One of the most common methods used to estimate internal consistency reliability is coefficient alpha, which was developed by that famous psychologist Lee J. Cronbach and is sometimes called Cronbach's alpha. I will not go into how coefficient alpha is computed, but suffice it to say that it typically ranges between 0.0 and 1.0, and we like to see higher rather than lower values. For example, if coefficient alpha for a measure is .80 or higher, we have confidence that the measure is relatively homogeneous and all of the items are measuring a common construct. If the items are measuring the same thing, coefficient alpha increases as the number of items increases. That is one of the reasons measures like the GRE, SAT, ACT, LSAT, and Myers-Briggs have so many items—to increase reliability.
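For the curious, coefficient alpha is simple to compute: alpha = k/(k-1) × (1 − Σ item variances / variance of total scores), where k is the number of items. The sketch below uses fabricated 1-5 responses to the four iguana items from the example above, and shows that dropping the off-construct Mozart item raises alpha:

```python
# Coefficient (Cronbach's) alpha sketch on fabricated Likert responses
# to the four iguana items from the text.
from statistics import pvariance

def cronbach_alpha(items):
    """items: list of per-item score lists, one inner list per item."""
    k = len(items)
    totals = [sum(resp) for resp in zip(*items)]  # per-person total scores
    item_var = sum(pvariance(item) for item in items)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

item1 = [5, 4, 2, 5, 1, 3, 4, 2]   # "I like iguanas a lot."
item2 = [4, 5, 2, 4, 1, 3, 5, 1]   # "...willing to have an iguana as a pet."
item3 = [5, 4, 1, 5, 2, 2, 4, 2]   # "...spend a lot of time with iguanas."
item4 = [2, 5, 4, 1, 3, 5, 2, 4]   # "I like Mozart."  (off-construct)

alpha_all = cronbach_alpha([item1, item2, item3, item4])
alpha_3 = cronbach_alpha([item1, item2, item3])
print(f"alpha, all four items:    {alpha_all:.3f}")
print(f"alpha, iguana items only: {alpha_3:.3f}")
```

As the text predicts, the three homogeneous iguana items yield a much higher alpha than the four-item version that mixes in the Mozart item.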
If you want to increase the chance of getting significant results, use measures with higher reliabilities.
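This advice follows from the classical attenuation relation: the observed correlation between two measures equals the true-score correlation shrunk by the square root of the product of their reliabilities, so unreliable measures yield smaller observed effects. A one-function sketch (the reliability values are arbitrary examples):

```python
# Attenuation sketch: unreliability shrinks observed correlations,
# making significant results harder to obtain.
from math import sqrt

def observed_r(true_r, rel_x, rel_y):
    """Classical-test-theory attenuation of a true-score correlation."""
    return true_r * sqrt(rel_x * rel_y)

r_high = observed_r(0.50, 0.9, 0.9)   # two highly reliable measures
r_low = observed_r(0.50, 0.5, 0.5)    # two unreliable measures
print(f"observed r with reliable measures:   {r_high:.2f}")
print(f"observed r with unreliable measures: {r_low:.2f}")
```

The same true relationship of .50 shows up as a much weaker observed correlation when both measures are unreliable, which directly lowers statistical power.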
Testing for the Right Work-From-Home Personality: What To Look For
- Adaptability. Whether telecommuting only a few days per week or full-time, staff need to be able to adapt their routine to their environment. They’ll need to make on-the-spot adjustments to respond quickly to changes in that environment (including disruptions), resolve IT challenges remotely, and sometimes work outside of normal hours. They may also need the flexibility to be on-site when it’s critical for the team’s success.
- Autonomy. Without a supervisor or co-workers providing direction or looking over their shoulder, many work-from-home employees need to be able to make decisions on their own, self-manage, and work independently. They are charged with completing their work and getting results for the company without frequent validation from a supervisor that they are on the right track.
- Openness. Those who are most successful working remotely will be more inclined to seek out and engage in learning new concepts, procedures, techniques, and experiences. People better suited for a work-from-home environment tend to be open-minded, curious, receptive to change, and continually looking for new and better solutions and better ways to do their work.
- Work Drive. Successful work-at-home employees must be oriented toward achieving their work goals in a timely manner regardless of the work environment. They need to be accountable for the same amount of production as if in the office, and ensure telecommuting isn’t delaying completion of tasks. The disposition to work long hours (including overtime) and an irregular schedule is especially important. They should be willing to make personal sacrifices for their job, and be tolerant of job encroachment on their personal lives.
- Emotional Stability. This is one of the best predictors of success for any job, but it has a particular importance for work-at-home staff. When working at home, employees face additional stressors such as increased interruptions, demands from others (including kids), the need to coordinate schedules with coworkers, social isolation from work peers, and reduced opportunity for feedback and support from their team. When levels of emotional stability are low, work performance and chances for success will likely decrease.
- Optimism. In addition to Emotional Stability, people who have a positive outlook will do better in a work-at-home environment because they’re better able to visualize positive outcomes, possibilities, and solutions. In contrast, a pessimistic disposition makes one more likely to view everything from a negative perspective, give up when jobs get tough, and disregard potentially helpful new ideas.
- Self-Directed Learning. When working at home, there is less opportunity to acquire new learning from supervisors and work peers. Staff must take more responsibility for learning on their own. Those with high levels of self-directed learning are better able to increase their job proficiency and knowledge, skills, and abilities by finding and mastering the material on their own.
- Image Management. Even when there is no formal dress code, grooming standard, or direct personal contact with clients or coworkers, an employee working from home needs to behave professionally. On the phone or via email, remote workers need to be perceived as qualified company representatives; ideally, no one should even guess that they are working from home.
Step 1: Identify what an “Outstanding Performer” looks like.

Note: Do not proceed with any other step before this one. The whole process will fail without it! One approach is to ask yourself, other managers, and co-workers what defines a “high-performing employee” at your company. Perhaps it’s those who prioritize effectively and get work done in the time expected. Maybe it’s the employee who is constantly learning and improving the way they work. In some positions, the superstars might be those who understand the value of relationships with customers and co-workers and constantly work to create and maintain them. It could be a combination of the above, or something else entirely. Create a checklist of behaviors and traits you see in current or past high performers. Use these checklists to create a “profile” of high performance for each role you need to hire for; or better yet, make one for each role in the company.
Step 2: Select Candidates Where Past Performance Can Be Inferred from Their Resume.

The key change to your hiring process is to find a strong correlation between what you see on the resume and what you are looking for in a high performer. It seems intuitive, but it’s a frequently overlooked step (especially if you haven’t been hiring based on a profile as outlined in Step 1!). In particular, if you need a goal-minded person, look for resumes that describe “Past Responsibilities” in terms of goals or results instead of tasks. If you want someone who is conscientious or detail-oriented, don’t rely on how they describe themselves. Select only resumes that SHOW the candidate’s attention to detail: good layout and formatting, consistent information, and no spelling or grammar errors.
Step 3: Use Personality and Aptitude Assessments on Selected Candidates.

After identifying candidates with a higher probability of performance based on their resume, ask all of them to take a personality and aptitude assessment. With the exception of probationary or trial employment, personality and aptitude assessments are the most valid, reliable, and effective step in the hiring process for increasing your odds of a good hire. In addition, a significant correlation has been shown between candidates’ assessment results and how their performance was rated after being on the job. Take a look at the table below:
Percent of Restaurant Managers Who Were Rated Outstanding on Job Performance

| Predictor | Score Group | Rated Outstanding |
| --- | --- | --- |
| Aptitude Score | Bottom 1/3 | 3% |
| Personality Score | Bottom 1/3 | 5% |
Step 4: Conduct Structured Interviews Based on Your “Outstanding Performer” Profile.

Structured interviews are shown to be more successful than unstructured interviews, and if the structure is built around your profile, the chances of success could increase further. In addition to finding supporting evidence that a particular candidate meets your qualifications, take this time to probe into any pieces of evidence you found that seem contradictory, or to give the candidate an opportunity to discuss areas of weakness. One example: if the candidate looked like a driven worker on their resume, but their assessment score for work drive was low, ask them to describe situations that would highlight this trait.
Step 5: Set clear, well-communicated, and realistic expectations.

During a follow-up interview with the final candidate(s), share the “Outstanding Performer” checklist created in Step 1 with them. Be clear and direct about what’s expected of the role. Watch body language and listen to verbal responses to evaluate whether they’re still confident they’re the right fit for the job. Before you extend a formal job offer, review the job responsibilities list; it should be consistent with what is outlined in your performer profile.
Remember: You Can’t Make Up in Training What is Missed in Testing.

Align yourself with a trusted, experienced, reliable assessment vendor before hiring any candidate. Give them a call, talk to them about your needs, and find out which assessments will get you the best results. Pre-employment assessment authorities like Resource Associates, who have over 150 job-specific assessments, can create custom testing around the qualities you’ve identified as most important for your high performers.
Age and Personality Test Results: Nothing to Worry About

We must admit there is a difference in personality traits with age. This won’t present itself in a discernible way on a candidate’s test results, however; the difference is very small, near negligible. Although attitudes might vary greatly with age, attitude is not a valid predictor of performance. Further, attitudes are not measured in personality testing, so no worries there. Some might argue that a different report should be developed for different age groups. With the evidence showing such negligible differences, however, it would not be worthwhile to do so: the individuals interpreting the report would have difficulty distinguishing differences in age based on the results! Personality test results won’t differ much between age groups, and there is no adverse impact on race/ethnicity, gender, or age. You can learn more about the results of these validity studies in our PSI Manual.
What does change with age?

As mentioned above, one’s life experiences will result in a change in their attitudes over time. This, however, is not directly linked to a change in their personality and will not be reflected in personality assessment results. Mental ability also tends to decline with age. A person’s personality traits may change slightly with age, but not in a discernible way: it’s common to experience higher openness, higher emotional stability, and a slight upward tick in agreeableness and conscientiousness with age. Of course, this is all relative to the individual’s personality at the outset. Everyone has a personality equilibrium, and very few life events are traumatic enough to alter someone’s personality long-term. As it happens, resilience in particular lends itself to a person returning to their normal state after transient states (e.g., after the loss of a loved one, after an accident, or after being active military involved in live conflict).
Our Advice to Businesses Interested in Using Personality Tests

Discrimination of all types, including age discrimination, is real and happens far more often than regulatory agencies know. If you are a business dedicated to fair and ethical hiring practices, work diligently to understand the adverse impact of any of your hiring methodologies. With Resource Associates personality testing, there is no adverse impact on race/ethnicity, gender, or age. You can learn more about our research on this topic and see our validity studies by reading our PSI Manual.
Additional Resources

- https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2562318/
- https://www.researchgate.net/publication/49696299_Age_Differences_in_Personality_Traits_From_10_to_65_Big_Five_Domains_and_Facets_in_a_Large_Cross-Sectional_Sample
- https://www.verywell.com/attitudes-how-they-form-change-shape-behavior-2795897
- http://www.livescience.com/12896-7-mind-body-aging.html
The Problem

Some jobs require only a handful of repetitive tasks. That’s not a problem, but the person in the job can be. When an entire position exists for tasks to be performed repeatedly – and at a high level of quality – people who aren’t comfortable with repetition can cause BIG problems. It’s frustrating and costly to hire a worker who says they prefer repetitive tasks, but whose work and morale over time say otherwise. (If you don’t notice it in their work, you’ll usually find it in their exit interview.) Employees who aren’t strong with repetitive tasks eventually start producing less, producing poor-quality work, or leave the position altogether. Usually these things happen within the first year.

The Cause

Comfort with repetition is hard to discover from a resume or interview alone. Most past employers will not, or cannot, comment on it. Candidates who really want the job may be inclined to exaggerate their abilities or their interest in repetitive tasks. Equally, candidates may have some experience with occasional repetitive tasks, but not with repetition at the level your position requires. Although they’re enthusiastic at the start, they may fail in the long run, and the quality of their work – and your production – will suffer.

Solutions

Determining a good fit with a candidate who thrives performing repetitive tasks should start with:
- Looking beyond their work history. Do any position titles or duties indicate they may have performed repetitive tasks in the past? Did they leave those roles because of repetition, or lack thereof? Do they have absolutely no job titles indicating experience with repetitive work?
- Using pre-hire testing for the job. Find a test built specifically for the position or the type of work it entails. Do the results show that this person will be a good match for the role? Many pre-hire tests will also give suggested interview questions to ask the candidate which help give extra certainty to your hiring decision.
- Watching the candidate’s body language during the interview. Body language, especially in response to a specific question, tells more than the verbal reply. Do they keep eye contact locked on while giving their answer? Do they nod their head? They may be signaling comfort and honesty. Do they shift in their seat, pause and think, or are they over-eager in their response? They may have something to hide. Remember that many career coaching and recruiting services, from local agencies to com, teach candidates to use interview body language to their advantage. Don’t rely on this information alone.
- Making “working interviews” a policy. If your company can afford it – not just financially, but from a time and production standpoint – make working interviews a part of your hiring process. These are sometimes treated like temp work or “contingent employment”; you will have to do new-hire paperwork for the employee and consult with your HR professionals on how to make it a policy. The cost, however, may be worth it in determining whether a candidate is right for the job, especially if the position tends to have high turnover.
- In a retail-clothing store, Resource Associates’ tests were able to identify superior salespeople who produced on average 500% more sales than the average worker.
- One of our clients that used the Resource Associates’ STAY (Still There After a Year) test generated a 50% reduction in turnover.
- A large convenience store chain achieved over 3000% ROI thanks to testing of their store managers.
- When a manufacturing company that had been shut down for more than a year started up again, one of the changes put in place was Resource Associates’ testing for hiring. Once the plant was operational, the Plant Manager noted it was 22% more efficient — which he attributed largely to the higher-performing workforce.
Big turnover costs big dollars. When employees stay, you save.
The Problem

As a business owner or manager, very few things feel worse than constant turnover. It’s demoralizing to both you and your staff, and we’re all aware of the thousands of dollars it takes to replace each employee who leaves. Nothing derails the performance of an organization more than turnover. When key employees quit, a series of events is set in motion that is hard to stop: organizational knowledge is lost, work gets harder, productivity suffers, customers notice, and profitability almost always suffers.
The Cause

Study after study shows that most turnover comes from employees who have been with the organization for 1 year or less. Interestingly enough, research shows these employees have 7 traits in common:
- 1. Their attitude toward work is casual rather than serious
- 2. They are easily annoyed or offended
- 3. Their schedule/hours worked doesn’t match their needs
- 4. Their preferences for how closely they are supervised don’t match the job
- 5. They have a high-stress/turmoil personal life
- 6. They do not intend to stay at any position for any length of time
- 7. They are working for the paycheck, instead of the job
- 8. They make promises, but don’t really care if they keep them or not (employees who stay make promises but go above and beyond them – like how we gave you 8 traits when we promised you 7!)