The data are drawn from the comprehensive database of management buy-outs and buy-ins in the UK maintained by the Centre for Management Buy-out Research. The database is compiled from a variety of sources, including regular returns from the investing organisations.

The definitions of the variables used in the analysis are as follows:

Syndication: 1 if the deal is syndicated, 0 otherwise

Risk variables (all figures refer to the initial deal):

Debt: total amount of debt

Mezzanine: total amount of mezzanine finance

Equity: total amount of equity

Gearing: (debt + mezzanine) / equity

Mgtshare: management’s share of equity

Totalmgt: total value of management equity

Total vendor: total value of equity held by the vendor

Vendor loan: value of the loan provided by the vendor

Value: total value of the deal

Rat: 1 if a ratchet is in effect, 0 otherwise

Buy-out: 1 if the deal is a buy-out, 0 if a buy-in

Receiver: 1 if the company was acquired from the receiver, 0 otherwise

Divestment: 1 if the company was a divestment, 0 otherwise

Industry dummies: 1,0 dummies for major industrial groupings

Regional dummies: 1,0 dummies for regions (South East is in the constant)

Venture capital firm characteristics:

Deals: number of buy-out or buy-in deals in which the firm was an equity investor, whether lead or otherwise

Funds managed: total value of funds under management in the year of the deal

Portfolio size: number of investments in the firm’s portfolio in the year of the deal

Dummy variables: separate 0,1 dummies for the 11 major venture capital firms

Captive: 1 if the firm is a captive, 0 otherwise

Semi-captive: 1 if the firm is a semi-captive, 0 otherwise

Year dummies: 0,1 dummies for the separate years from 1990 to 1995 inclusive (1989 is in the constant)

Full information is available for most of the variables in the model for the 1,999 venture-backed buy-outs and buy-ins identified in the UK between 1989 and 1995, and whether or not the deal is syndicated is known for every case. However, there are missing values for some of the explanatory variables, particularly the financial measures, which are not available for reasons of confidentiality.

Thus, the data set is subject to missing values of the independent variables (Little, 1992; Robins et al., 1996). The traditional method of dealing with missing X values is complete case analysis, i.e. excluding all cases for which one or more of the variables is missing. The application of standard statistical analysis to the resulting data depends on the missing values being missing completely at random (MCAR; Little and Rubin, 1987). However, it is clear in the present case that the remaining sample is not a random sub-set of the total data. In particular, the complete cases remaining after deletion of those with missing values have a much greater proportion of syndicated deals (54.7%) than the total sample (31.1%). Hence, any estimates of the population parameters based on the complete cases alone will be biased.

Thus, as a minimum, any estimators of the population parameters using these complete cases should be based on a weighted analysis adjusting for the differences between the complete cases and the total sample. Results on this basis are supplied below. However, this approach discards a large amount of available information, since complete information is available for only 645 of the 1,999 cases. An alternative approach is to impute the missing values using an efficient and consistent estimator. The Expectation–Maximisation (EM) algorithm, first devised by Orchard and Woodbury (1972), has been applied using the EMCOV routines written by Graham and Hofer (1994). For an application to financial data see Warga (1992).
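The weighting adjustment can be sketched as follows. This is an illustrative calculation, not the authors' code; it uses the syndication proportions reported above (31.1% in the full sample, 54.7% among complete cases) as post-stratification targets.

```python
def complete_case_weights(p_pop, p_cc):
    """Illustrative post-stratification weights that make the complete
    cases representative of the full sample on the syndication rate.
    p_pop: proportion syndicated in the full sample (0.311 in the text);
    p_cc:  proportion syndicated among the complete cases (0.547)."""
    w_synd = p_pop / p_cc             # down-weight over-represented syndicated deals
    w_non = (1 - p_pop) / (1 - p_cc)  # up-weight under-represented non-syndicated deals
    return w_synd, w_non

w_s, w_n = complete_case_weights(0.311, 0.547)
# syndicated complete cases receive a weight below 1, non-syndicated above 1
```

With these weights, the weighted proportion of syndicated deals among the complete cases matches the 31.1% observed in the total sample.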

These routines assume a multivariate normal distribution, but it has been shown that the estimators remain efficient if there is full information on all the categorical variables (Little, 1992). Simulation has also shown that, even in the presence of categorical variables with missing values, the multivariate normal assumption generally works well. In the present data, missing values occur mainly in the continuous variables; the only categorical variable with any missing values is the ratchet dummy (Rat).

The EM algorithm is appropriate as long as the missing values are missing at random (MAR); they do not need to be MCAR (Little and Rubin, 1987). Under MAR the observed values need not be a random sub-sample of the total sample, but they must be a random sub-sample within the sub-class defined, in our case, by syndication. There is no reason to suspect that the missing values within syndicated and non-syndicated deals are other than a random sample within these groups. Hence, the EM algorithm is applicable.
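The mechanics can be illustrated with a deliberately simplified sketch: a two-variable EM-style imputation under a bivariate normal assumption, in which each missing value is repeatedly replaced by its conditional expectation given the observed variable. This is a toy illustration of the idea only, not the EMCOV routines themselves, which handle many variables jointly.

```python
import statistics

def em_impute(x, y, iters=50):
    """Toy EM-style imputation: y has missing entries (None). Under a
    bivariate normal assumption each missing y is replaced by its
    conditional mean E[y|x] = mu_y + beta*(x - mu_x), and the moments
    and imputations are re-estimated until they stabilise."""
    # start from the complete-case mean of y
    mu_y0 = statistics.fmean(b for b in y if b is not None)
    y_hat = [b if b is not None else mu_y0 for b in y]
    for _ in range(iters):
        mx = statistics.fmean(x)
        my = statistics.fmean(y_hat)
        # regression slope of y on x from the current completed data
        beta = (sum((a - mx) * (b - my) for a, b in zip(x, y_hat)) / len(x)
                / statistics.pvariance(x))
        # E-step: refill each missing y with its conditional mean
        y_hat = [b if b is not None else my + beta * (a - mx)
                 for a, b in zip(x, y)]
    return y_hat
```

Note that this sketch fills each gap with a value lying exactly on the regression line; the variance-restoring random residual added by EMCOV, as described below, is omitted here for determinism.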

Thus, EM is used to impute values for the cases where the value of an explanatory variable is unknown. There are, in fact, 50 different missing value patterns in the dataset, with 7.9% of the values missing and a maximum of 11 missing values in any one case (17.7%). The EM algorithm imputes each missing value by OLS estimation using all the other known variables for that case. However, since such an imputed value lies exactly on the regression line, these estimates contain too little variance. The EMCOV routines therefore add to each imputed value a random residual based on those derived in the estimating process. Further, the imputed values are simply one of many samples that could be drawn. The process of multiple imputation is therefore applied, whereby new samples are bootstrapped from the original data set, processed by EMCOV, and the coefficients and standard errors re-estimated. Simulation suggests that some 5–10 datasets should be derived in this manner (Schafer, 1997). In this paper, the original dataset and 10 bootstrapped samples were used. Finally, the estimates of the population coefficients and standard errors are derived by pooling the results from the original and bootstrapped samples.
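The pooling step can be sketched with the standard combining rules for multiply imputed data (Rubin's rules, as discussed in Schafer, 1997). The source does not state the exact formula applied, so the following is an assumption about the pooling method: the point estimate is the mean of the per-dataset estimates, and the total variance adds the between-dataset variance, inflated by 1/m, to the average within-dataset variance.

```python
import statistics

def pool(coefs, ses):
    """Combine one coefficient estimated on m imputed datasets.
    coefs: per-dataset point estimates; ses: per-dataset standard errors.
    Returns the pooled estimate and its pooled standard error
    (Rubin's rules -- assumed here, not stated in the source)."""
    m = len(coefs)
    q_bar = statistics.fmean(coefs)                   # pooled point estimate
    within = statistics.fmean(se ** 2 for se in ses)  # mean within-dataset variance
    between = statistics.variance(coefs)              # between-dataset variance (n-1 divisor)
    total_var = within + (1 + 1 / m) * between
    return q_bar, total_var ** 0.5
```

The pooled standard error therefore exceeds the naive within-dataset average whenever the estimates disagree across the imputed samples, reflecting the extra uncertainty due to imputation.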

Since the dependent variable takes only the values 0 and 1, the population coefficients and standard errors have been estimated by logistic regression on the full dataset, including imputed values.
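As an illustration of this estimation step, the following is a minimal single-predictor logistic regression fitted by gradient ascent on the log-likelihood. The variable names are hypothetical; the model estimated in the paper includes the full set of regressors defined above.

```python
import math

def fit_logit(xs, ys, lr=0.5, steps=5000):
    """Minimal one-predictor logistic regression fitted by gradient
    ascent: models P(syndicated = 1 | x) = 1 / (1 + exp(-(b0 + b1*x)))."""
    b0 = b1 = 0.0
    n = len(xs)
    for _ in range(steps):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += y - p        # gradient of the log-likelihood w.r.t. b0
            g1 += (y - p) * x  # ... and w.r.t. b1
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1
```

A positive fitted b1 indicates that higher values of the predictor raise the probability that the deal is syndicated; in practice one would use a standard statistics package, with the standard errors then pooled across the imputed datasets as described above.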

As will be seen below, the majority of syndications involve only two or three partners. Large syndications are not sufficiently common to permit analysis of syndicate size by the techniques discussed here. Nevertheless, a further dimension of understanding the nature of syndication in the venture capital market concerns the density, or inter-connectedness, of networks. There are a variety of measures for analysing aspects of a network (Wasserman and Faust, 1994). For comparative purposes the three measures used by Bygrave (1988) have been employed here: (a) a measure of centrality, the number of other venture capital firms to which a specific one has direct links; (b) a measure of intensity, the sum of all the syndications involving a specific venture capital firm and all others; and (c) a weighted measure of the strength of the connection on a scale of 0 to 1 (see Bygrave, 1988 for details).
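The first two of these measures can be computed directly from a list of syndicated deals. The sketch below is an illustrative implementation under one reading of Bygrave's definitions, with centrality taken as the count of distinct co-investors and intensity as the total count of pairwise co-syndications; the firm names in the usage comment are hypothetical.

```python
from collections import defaultdict
from itertools import combinations

def network_measures(deals):
    """deals: list of sets, each holding the venture capital firms in one
    syndicate. Returns (centrality, intensity):
    centrality[f] = number of distinct firms f has co-invested with;
    intensity[f]  = total pairwise co-syndications involving f."""
    partners = defaultdict(set)
    intensity = defaultdict(int)
    for deal in deals:
        for a, b in combinations(sorted(deal), 2):
            partners[a].add(b)
            partners[b].add(a)
            intensity[a] += 1
            intensity[b] += 1
    centrality = {f: len(p) for f, p in partners.items()}
    return centrality, dict(intensity)

# e.g. network_measures([{"FirmA", "FirmB"}, {"FirmA", "FirmB", "FirmC"}])
```

Repeated syndication between the same pair of firms raises intensity but not centrality, which is what distinguishes the two measures.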

In addition to the econometric analysis, the authors sought the views of venture capital market participants on changes in syndication over the past five years and expectations for the future. This was achieved through a series of semi–structured interviews with the Chief Executives of 22 UK venture capital firms in May and June 1996. The CEOs were selected with the help of the BVCA as being recognised ‘lead steers’ covering the broad spectrum of the UK venture capital industry. Interviews lasted for between one and two and a half hours and were conducted on a confidential basis.


© 1997 Babson College All Rights Reserved

Last Updated 03/19/98