METHODOLOGY

The hypotheses propose that VCs do not have a strong understanding of their decision process. Therefore, the current design captures each VC's actual decision process and then compares it to the VC's stated decision process (i.e., how the VC believes he/she makes decisions). Each participant is provided with several pieces of information about 50 potential investments. Regression analysis of the VC's 50 decisions captures the actual decision policy. In addition, each VC provides a weighting scheme of how he/she believes the information was used by splitting 100 points among the presented information factors. This weighting scheme can be formulated as a regression equation representing the VC's believed decision policy (or, for the purposes of this experiment, the VC's stated decision policy). The actual decision policy and stated decision policy can then be compared to assess the VC's self-understanding. The following paragraphs detail the methodology.
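To make the comparison concrete, the following Python sketch shows one way an actual policy could be estimated from the 50 judgments and placed on the same 100-point footing as the stated weights. The cue values, judgments, and weights here are synthetic illustrations, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cases, n_cues = 50, 5

# Synthetic stand-ins for the coded cue values and the VC's 50 judgments.
cues = rng.normal(size=(n_cases, n_cues))
true_policy = np.array([3.0, 1.0, 2.0, 0.5, 1.5])          # hypothetical
judgments = cues @ true_policy + rng.normal(scale=0.5, size=n_cases)

# Actual policy: regress the 50 investment judgments on the cue values.
X = np.column_stack([np.ones(n_cases), cues])
beta = np.linalg.lstsq(X, judgments, rcond=None)[0][1:]    # drop intercept

# Rescale |beta| to sum to 100 so it is commensurate with the stated weights.
actual_weights = 100 * np.abs(beta) / np.abs(beta).sum()
stated_weights = np.array([40, 10, 25, 5, 20])             # VC's 100-point split

print("actual:", actual_weights.round(1))
print("stated:", stated_weights)
print("mean absolute gap:", np.abs(actual_weights - stated_weights).mean().round(1))
```

A small gap between the two weight vectors would indicate good self-insight; a large gap would support the hypothesized lack of understanding.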

Decision Experiment. The experiment is administered via a notebook PC brought to the VC's office; such convenience likely increases participation. Unlike the majority of past research on VCs, which uses ex post interviews, the policy capturing methodology elicits the VC's decision policy in real time. As such, this methodology eliminates the threat of recall and rationalization biases (Barr, et al., 1992; Sandberg, et al., 1988). Furthermore, policy capturing methods are a logical extension of real time verbal protocols of VCs (Hall & Hofer, 1993; Sandberg, et al., 1988; Zacharakis & Meyer, 1995). Whereas verbal protocols provide a deep understanding of the decision process (especially the timing and order of information review), policy capturing experiments provide a deeper understanding of the various information cues' relative importance to the decision and a measure of how well VCs understand their own process. Additionally, policy capturing experiments enable greater control and are conducive to quantitative statistical tests (i.e., regression and ANOVA). Finally, the experiment captures the VC's "real" decision policies (versus espoused policies; Hitt & Tyler, 1991) without relying on the VC's conscious efforts to accurately introspect about his/her own decision process.

Sample. The sample for this experiment is 53 practicing VCs from two entrepreneurial "hotbeds": (1) the Colorado Front Range (primarily the Denver/Boulder metro area) and (2) Silicon Valley in California. Two of the 53 participants were removed (one because the PC crashed during the exercise and the other because he did not wish to continue past the first few decisions), leaving a final sample of 51.

Procedure. The experiment follows a two-step creation process: (1) identifying information cues that are valuable to the investment decision and (2) creating decision cases for the VC to judge. The numbers of cases and cues are interrelated. The ideal number of cases for policy capturing experiments is a function of the number of cues and the time required for participants to complete the experiment (Stewart, 1988). The more cases each participant completes, the higher the validity of that person's judgment policy. Unfortunately, too many cases may tire the judge and limit the likelihood of participation. Stewart (1988) suggests that 35 cases are typically sufficient to accurately capture the subject's decision policy. Another rule of thumb is to have a minimum of five cases for every cue being tested (Stewart, 1991). In sum, a properly designed policy capturing experiment requires an adequate ratio of cases to cues; the sketch below illustrates these rules of thumb.
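As an illustration only (the thresholds are the Stewart rules of thumb cited above, not a formula from the study), the constraints can be expressed in a few lines of Python:

```python
def minimum_cases(n_cues: int, floor: int = 35, cases_per_cue: int = 5) -> int:
    """Minimum decision cases for a policy capturing design, per the
    Stewart (1988, 1991) rules of thumb: at least 35 cases overall and
    at least five cases per tested cue."""
    return max(floor, cases_per_cue * n_cues)

for n in (4, 8, 25):
    print(f"{n} cues -> at least {minimum_cases(n)} cases")
# 4 cues  -> at least 35 cases
# 8 cues  -> at least 40 cases (the 50-case design exceeds this)
# 25 cues -> at least 125 cases (why the full cue set is untenable)
```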

Clearly, a policy capturing exercise that uses all the identified information factors from previous research (approximately 25 cues in aggregate form) would be untenable. First, evaluating that much information for each case would be burdensome for the participating VCs. Second, to achieve an appropriate case-to-cue ratio, the VCs would have to evaluate over 100 cases (5 cases/cue × 25 cues = 125). Evaluating that many cases would increase the time required for each participating VC to complete the exercise, which might tire each participant and thereby reduce the experiment's validity. Such an increased time requirement might also discourage VCs from participating in the exercise altogether. Third, it is likely that several of the identified cues are highly correlated with each other, and high multicollinearity adversely affects policy capturing methodology (Stewart, 1988). For these three reasons, the cues used in the exercise are a subset of all possible cues.

In order to arrive at a manageable set of cues, cue frequency across studies and each cue's reported importance within each study are used as criteria for inclusion in the experiment. Additionally, cues that are highly correlated with other cues are removed (Lewis, Patton & Green, 1988), retaining the cue deemed most important in the literature and verified by a consulting expert VC; a sketch of this screening step follows. Although the list is not exhaustive, it must be remembered that decision makers typically use three to seven cues (Stewart, 1988). Moreover, experts typically identify far more cues than they actually use (Stewart, 1988). Therefore, it is more probable that the identified cues in this study include unimportant factors than that they exclude important ones.
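The screening step just described might look like the following Python sketch. The cue names, importance ratings, 0.7 correlation cutoff, and data are all illustrative assumptions rather than the study's actual values.

```python
import numpy as np

rng = np.random.default_rng(1)
names = ["market size", "market growth", "competition",
         "team experience", "proprietary edge"]
importance = [5, 3, 2, 4, 1]   # assumed literature ratings (higher = more important)

data = rng.normal(size=(40, len(names)))
data[:, 1] = data[:, 0] + rng.normal(scale=0.3, size=40)   # force one collinear pair

corr = np.corrcoef(data, rowvar=False)
kept = list(range(len(names)))
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        if i in kept and j in kept and abs(corr[i, j]) > 0.7:
            # Drop the less important cue of a highly correlated pair.
            kept.remove(i if importance[i] < importance[j] else j)

print("retained cues:", [names[k] for k in kept])
# "market growth" (importance 3) is dropped in favor of "market size" (5).
```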

Each decision cue is given a value range that allows it to be compared across cases (Stewart, 1988). When possible, concrete values are used (e.g., market size), but purely representative distributions are appropriate for subjective cues. Even concrete cues should be presented relative to other cases (Stewart, 1988). Since this experiment uses actual business plans to develop the cases, a uniform coding system (illustrated below) ensures that cues are coded consistently across business plans.
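Such a scheme might be represented as follows; the cue names, units, and ranges are hypothetical, offered only to show the distinction between concrete and subjective cues:

```python
# Hypothetical uniform coding scheme: concrete cues keep real-world units,
# subjective cues are mapped onto a representative 1-5 scale.
coding_scheme = {
    "market size":     {"type": "concrete",   "unit": "$ millions", "range": (10, 1000)},
    "market growth":   {"type": "concrete",   "unit": "% per year", "range": (0, 60)},
    "team experience": {"type": "subjective", "unit": "1-5 scale",  "range": (1, 5)},
    "competition":     {"type": "subjective", "unit": "1-5 scale",  "range": (1, 5)},
}

def code_value(cue: str, value: float) -> float:
    """Clamp a raw value from a business plan into the cue's allowed range."""
    lo, hi = coding_scheme[cue]["range"]
    return min(max(value, lo), hi)

print(code_value("market size", 2500))   # -> 1000 (clamped to the range ceiling)
```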

Once the actual cases were collected, the lead researcher pulled the information factors from each actual business plan. Although there is a potential threat that the information included in a plan is inaccurate (which would carry over into the experiment), Roure and Keeley (1990) found that VCs rarely had to make "intense" corrections. Thus, it is reasonable to assume that the business plans are accurate. To verify inter-judge reliability, a second individual also coded all appropriate cues for two of the actual business plans. Overall inter-judge reliability is 87.5 percent; Berelson (1952) reports that inter-judge reliability typically ranges from 66 percent to 95 percent, so the coding is deemed fairly accurate.
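For clarity, percent agreement of this kind reduces to a simple ratio of matching codes. The values below are hypothetical and merely happen to reproduce the 87.5 percent figure:

```python
# Hypothetical codes assigned by the two coders to eight cues.
coder_1 = [3, 1, 4, 2, 5, 3, 2, 4]
coder_2 = [3, 1, 4, 2, 5, 2, 2, 4]   # disagrees on one cue of eight

matches = sum(a == b for a, b in zip(coder_1, coder_2))
agreement = 100 * matches / len(coder_1)
print(f"inter-judge agreement: {agreement:.1f} percent")   # 87.5 percent
```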

The final design allows the VCs to use four to eight cues (depending upon the treatment) and judge fifty cases. The participants are divided into three groups (see Table 1). Group one uses the information cues associated with a base cognitive model as derived from the literature (see Table 1). The cues are drawn from studies (primarily Tyebjee & Bruno, 1984; MacMillan et al., 1985, 1987; Robinson, 1987; Timmons, et al., 1987) that rely on post hoc methods such as interviews and surveys; these studies thus rely on the VC's introspection about which decision factors are most important. In essence, these cited cues correspond to the cognitive side of the lens model (see Figure 1). Group two is provided with more cues than either the first or third group to assess whether more information changes the decision process (see Table 1). Specifically, group two's cues include the five used by group one plus three more commonly cited cues from the literature. Group three uses the information factors that best distinguish between successful and failed new ventures; these cues correspond to the task side of the lens model (see Figure 1). The current study uses task cues derived by Roure and Keeley (1990). Although thirty-five to forty cases would have been sufficient, the additional ten cases further increase the experiment's validity. The sketch below summarizes the treatment structure.
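As a compact summary (the cue names are placeholders, since Table 1 is not reproduced here, and the four-cue count for group three is inferred from the "four to eight cues" range):

```python
# Placeholder cue names; the actual cues appear in Table 1 of the paper.
group_1 = ["cog_cue_1", "cog_cue_2", "cog_cue_3", "cog_cue_4", "cog_cue_5"]
group_2 = group_1 + ["cog_cue_6", "cog_cue_7", "cog_cue_8"]  # group one's five plus three
group_3 = ["task_cue_1", "task_cue_2", "task_cue_3", "task_cue_4"]  # Roure & Keeley (1990)

treatments = {"group 1 (cognitive)": group_1,
              "group 2 (cognitive, expanded)": group_2,
              "group 3 (task)": group_3}

for name, cues in treatments.items():
    print(f"{name}: {len(cues)} cues x 50 cases")
```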
