Measure                       Overall (N = 318)   Remote sample (n = 182)   On-site sample (n = 136)   t/χ²
Mean (SD) age (years)         (21.4)              47.3 (18.4)               47.8 (24.9)                −0.18 (df = 316)
Age range (years)             19–87
Socioeconomic status                              3.1                       2.9                        2.00 (df = 314)*
Mean education (years)                            16.7                      15.5                       4.14 (df = 309)**
Sex (% female)                                    65.7                      72.1
Hispanic (%)                                      4.4                       4.4
Race (%)
  Caucasian                   89.0
  Asian or Pacific Islander   7.7
  African American            1.6
  Other                                           1.6                       2.2

Note. Socioeconomic status was assessed via self-reports on a 5-point Likert-type scale from 1 = lower income to 5 = upper income. Statistical tests compare the remote and on-site samples.
* p < .05. ** p < .01.

Table 2
Participant Characteristics for Behavioral Task Subsample

Measure                       Younger adults (n = 60)   Older adults (n = 49)   t/χ²
Mean (SD) age (years)         (6.6)                     72.5 (8.3)              − (df = 107)***
Age range (years)
Socioeconomic status          3.0                                               0.51 (df = 107)
Mean education (years)        16.3                                              −3.54 (df = 103)***
Sex (% female)                73.5
Hispanic (%)                  8.3                       2.0
Race (%)
  Caucasian
  Asian or Pacific Islander
  African American
  Other                                                 2.0

Note. Socioeconomic status was assessed via self-reports on a 5-point Likert-type scale from 1 = lower income to 5 = upper income. Two participants (one older and one younger adult) who completed the decision task but did not provide demographic data were excluded from this table.
*** p < .005.

PREFERENCES FOR CHOICE ACROSS ADULTHOOD
consistency for accessibility ratings was acceptable (Cronbach’s alpha = .86). Maximizing was assessed via the Maximization Scale (Schwartz et al., 2002), personality traits were screened via the short version of the Big Five Inventory (Rammstedt & John, 2007), need for cognition was measured using the corresponding subscale of the Rational Experiential Inventory (Pacini & Epstein, 1999), and future time perspective was measured via the Future Time Perspective scale developed by Lang and Carstensen (2002). We measured participants’ cognitive abilities in terms of STM (Digit Span test; Wechsler, 1997), numeracy (Numeracy Scale; Lipkus, Samsa, & Rimer, 2001), and vocabulary (Shipley Vocabulary subtest; Zachary, 1986). We also administered a single-item measure of the extent to which individuals believe that larger versus smaller choice sets are more likely to contain the optimal alternative (labeled below as optimal choice belief). Participants responded to this item using a 7-point Likert-type scale (from 1 = strongly disagree to 7 = strongly agree). The behavioral decision task comprised a computerized decision among 20 cars using a standard information grid (see Figure 1), presented via E-Prime 2.0 experimental software. The cars were portrayed as hypothetical and labeled with names of rare birds (e.g., “Pipit,” “Turaco,” and “Xenops”), but in reality they represented the 20 most common midsized sedans sold in the United States. The information grid contained information on the following six attributes for all 20 cars: gas mileage, horsepower, turning radius, safety rating, comfort, and dependability. 2
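The internal-consistency (Cronbach's alpha) coefficients reported for these scales can be reproduced from item-level data with a few lines of code. The sketch below is illustrative only (the item scores are made up, not the study's data) and implements the standard formula alpha = k/(k − 1) × (1 − Σ item variances / variance of total scores):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score lists.

    items[i][j] = score of participant j on item i.
    """
    k = len(items)  # number of items

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    n = len(items[0])
    # Total scale score for each participant
    totals = [sum(items[i][j] for i in range(k)) for j in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Hypothetical 3-item Likert-type scale, 4 respondents (scores 1-5)
demo = [
    [4, 2, 5, 3],
    [4, 1, 5, 3],
    [3, 2, 4, 4],
]
print(round(cronbach_alpha(demo), 2))  # prints 0.91
```

In practice a statistics package would be used, but the formula itself is small enough to verify by hand.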
Following standard practice (Mata & Nunes, 2010), each piece of information was contained in a separate cell within the grid, and all information was initially hidden from participants, who were instructed to use the computer mouse to click on a cell to reveal the corresponding information (see Figure 1). Each piece of information remained visible until the participant clicked on another cell, at which point the initial information would disappear. Thus, only one piece of information was visible at any time, although participants were allowed to revisit any cell. Participants were allowed to view as much information as they desired, and were given unlimited time to search for information within the grid prior to selecting a car.
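The click-to-reveal logic of the information grid can be sketched as follows. This is a simplified stand-in for the E-Prime implementation, and the car names and attribute values below are hypothetical (the real task used 20 cars by six attributes):

```python
class InfoGrid:
    """Minimal information board: one cell visible at a time, all clicks logged."""

    def __init__(self, values):
        self.values = values   # {(option, attribute): displayed text}
        self.visible = None    # currently revealed cell, if any
        self.clicks = []       # acquisition sequence, revisits included

    def click(self, option, attribute):
        cell = (option, attribute)
        self.clicks.append(cell)
        self.visible = cell    # the previously revealed cell disappears
        return self.values[cell]

# Hypothetical 2-car, 2-attribute grid
grid = InfoGrid({
    ("Pipit", "gas mileage"): "31 mpg",
    ("Pipit", "safety rating"): "4",
    ("Xenops", "gas mileage"): "28 mpg",
    ("Xenops", "safety rating"): "5",
})
grid.click("Pipit", "safety rating")
grid.click("Xenops", "safety rating")  # first cell is now hidden again
print(len(grid.clicks))  # prints 2 (depth of search so far)
```

Logging the acquisition sequence this way is what allows process measures such as search depth and revisits to be computed afterward.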
Prior to making the decision, each participant was given an information sheet providing details about the decision attributes. This helped address any interindividual differences in background knowledge of automobiles and ensured that all participants were able to make an informed decision among the cars. The information sheet was modeled after buyers’ guides provided by consumer information websites (e.g., Consumer Reports and Amazon.com) and contained explanations for each of the six decision attributes. For instance, safety rating was defined as follows: “The safety rating refers to the amount of protection provided by the car to its passengers during a crash. These ratings are provided by the National Highway Traffic Safety Administration, which tests all vehicles in terms of their crash safety and rates them from 1 (Worst) to 5 (Best).”

Procedure

After providing informed consent, all participants completed a computerized survey containing the following measures in order: demographics, choice set size preferences, DMSE, memory self-efficacy, future time perspective, maximizing, need for cognition, personality, optimal choice beliefs, preference clarity, numeracy, vocabulary, and STM. Completion of the survey took approximately min. After finishing the survey and taking a min break, a subset of participants completed the behavioral decision task. Participants were informed via the computer program that they would be making a hypothetical decision about cars. 3 They were subsequently asked how many options they wished to choose from,
ranging from four to 20 options in increments of four (i.e., 4, 8, 12,
16, or 20). Participants were then provided with instructions regarding the decision task, including the information sheet described above. All participants—independent of their reported choice set size preferences—then completed the decision task using the 20-option information grid and indicated their desired car. 4 After participants made their decisions, they were checked for suspicion and debriefed. Completion of the behavioral decision task took approximately 5–10 min.

Results

Exploratory data analyses indicated that many of the dependent measures, including choice set size preferences, were not normally distributed. Consequently, data were analyzed using nonparametric tests when appropriate. The pattern of results (including the key association between choice set size preferences and age) did not differ significantly between participants who completed the survey remotely versus onsite (ps > .05). Therefore, further analyses collapsed both participant groups into a combined sample. Furthermore, no significant associations were observed between choice set size preferences and any of the demographic variables besides age, so they are not discussed further.
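As an illustration of the kind of nonparametric analysis used here (not the authors' own code), a rank-based association between age and choice set size preference can be computed as Spearman's rho, i.e., the Pearson correlation of the rank vectors with ties assigned average ranks. The data below are invented for the example:

```python
def ranks(xs):
    """1-based average ranks; tied values share the mean of their positions."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1  # extend over the tie group
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical participants: age vs. preferred set size (4-20 in steps of 4)
age = [21, 25, 34, 48, 60, 72, 80]
pref = [20, 16, 20, 12, 8, 8, 4]
print(round(spearman_rho(age, pref), 2))  # prints -0.93
```

Because rho depends only on ranks, it is robust to the skewed preference distributions reported above, which is the usual motivation for choosing it over Pearson's r.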