Development of Compartmental Models in Stable-Isotope Experiments
Application to Lipid Metabolism
Abstract
Kinetic experiments are of great importance in lipid research because they further the understanding of lipid metabolism in vivo and help to explain the physiopathology of lipid disorders in humans. At present, because of species specificity, no animal model can validly replace a study in humans to explore lipid metabolism, and the use of radioactive tracers in humans is restricted. Thus, stable-isotope tracer kinetic studies have become an important component of research programs aiming at a quantitative understanding of the dynamics of metabolic processes in humans in vivo. The aim of this review is to describe the practical aspects of compartmental model development in stable-isotope experiments. Recent developments in computer hardware and modeling software have dramatically facilitated the modeler's computational task. In the current review, we show that the model may be considered an integral component of the experimental design and that model development must obey strict rules to provide a rigorous solution. The main difficulties of model development in tracer experiments, such as experiment design, model identifiability, data expression, comparison of models, and tracer recycling, are presented with extensive references. We have paid particular attention to kinetic modeling in stable-isotope experiments because it has shown the greatest development in recent years.
- Received October 30, 1997.
- Accepted November 25, 1997.
In lipid research, kinetic experiments are a necessary step in understanding the processes leading to the plasma lipoprotein alterations observed in different physiopathological conditions. Indeed, the meaning of a variation in the concentration of a plasma lipoprotein fraction is quite different depending on its mechanism: altered production, altered catabolism, or both. For example, only kinetic experiments have been able to demonstrate that the increased plasma VLDL apoB concentration in diabetes mellitus is explained by both an increased production and a decreased catabolism of this apolipoprotein1 or that decreased apoB production and increased LDL catabolism occur in patients with truncated apoB-75.2
Because of the specific features of human lipoprotein metabolism, in vivo studies in humans cannot be replaced by animal experiments. For ethical reasons, the use of radioactive tracers should be avoided or is in fact prohibited in humans. On the other hand, recent improvements in mass spectrometry and isotope ratio mass spectrometry3 have increased the sensitivity and the reliability of stable-isotope enrichment measurements. In consequence, stable-isotope tracer kinetic studies are now an important component of in vivo lipid research programs in humans.
The data provided by a tracer experiment contain more information than can be extracted by simple methods of analysis (eg, linear regression or area under a curve). Compartmental modeling is a powerful mathematical approach that is widely used to obtain quantitative or predictive information about the dynamics of a system. Kinetic modeling has been greatly facilitated by recent improvements in computer hardware and modeling software.4 However, kinetic modeling in stable-isotope experiments differs from radioactive-isotope kinetics in many respects, and rigorous rules for compartmental model development must be kept in mind, from the experiment design step until a valid model is achieved. This review presents the practical aspects of model development in stable-isotope experiments. The examples shown in this article were obtained with the widely used SAAM4 software, and the terminology used in this article is analogous to that used in this software.
Background
Aim of a Kinetic Experiment in Lipid Metabolism Studies: Link Between the Kinetic Experiment and the Mathematical Model
In lipid research, the aim of a kinetic experiment is to obtain information about the dynamics of physiological processes, such as molecule transfers between lipoproteins (eg, cholesterol kinetics), or to calculate apolipoprotein production and catabolism. To study the behavior of an endogenous molecule, the tracee, the investigator introduces the same molecule, but labeled (the tracer), into the system (usually via the bloodstream). This process is called exogenous labeling.5 Endogenous labeling5 occurs in the situation wherein a labeled precursor of the molecule of interest is used to label this molecule (eg, infusion of a radiolabeled amino acid to label a protein). Ideally, the tracer can be detected by an observer, has the same kinetic behavior as the tracee, and does not perturb the system. Thus, the information provided by the tracer reflects the behavior of the tracee. At various times, the amount of tracer is quantified to provide a kinetic curve. Then a mathematical model is constructed to extract all of the information contained in the kinetic curve. This is possible because the structure of the model reflects the structure of the system under study. By fitting the model to the data, it is possible to calculate the parameters of the model that characterize the flux of molecules between kinetically homogeneous pools of molecules. For example, it is possible to calculate the flux of cholesterol between lipoprotein fractions,6 information that cannot be obtained by static measurements alone.
Stable-Isotope Tracers: Advantages of Their Use to Study Human Lipid Metabolism
Ideally, a tracer should have the same physical and chemical properties as the tracee and exactly reflect the tracee's movements in vivo. The rate constants for chemical and physical processes should be the same. The tracer should be uniformly mixed with the tracee, and the number of labeled molecules introduced should not affect the state of the system. In fact, the rates of chemical and physical processes depend on the masses of the atoms involved. The difference in rate constants between the unlabeled and the isotopically labeled molecule is called the isotope effect.7 To reduce the total amount of tracer to be used without losing sensitivity, it has been suggested that isotope ratio mass spectrometry be combined with uniformly labeled molecules. However, uniformly labeled tracers should be used with great care because they are potential inducers of a strong isotope effect. The combination of moderately labeled molecules with high-precision mass spectrometry may be the best compromise.3
Stable-isotope tracers offer several advantages over radioactive isotopes.8 The use of compounds labeled with stable isotopes is safe in humans.9 In protein metabolic experiments, the use of amino acids labeled with stable isotopes allows simultaneous study of several proteins and avoids protein alterations induced by exogenous protein radiolabeling.10 For example, it is possible to simultaneously study apoB, apoA-I, and apoA-II kinetics by using [13C]leucine.3 Comparisons between radioactive and stable-isotope tracers have shown that both tracers lead, in general, to the same conclusions.11 12 13 Nevertheless, the two tracers differ in some respects14: in contrast to radioactive tracers, stable-isotope tracers have nonnegligible mass and are naturally present in the system, and the measured variable is a ratio of the two isotopic species. Because of these features, radioactive-tracer methods of data analysis cannot be applied unmodified to stable-isotope data.
Data Expression in Stable-Isotope Experiments
The presumed analogy between the radioactive specific activity and stable-isotope enrichment has been shown to be incorrect14 15 and to result in significant errors in model systems when the dose of tracer is ≥10% of the pool size.16 The proper analog of specific activity is the tracer-to-tracee ratio z(t)14: z(t)=e(t)/[ei−e(t)], where e(t) is the enrichment of the sample at time t and ei is the enrichment of the infusate.
However, in some situations, a variable different from z(t) is required.14 17 In the estimation of protein fractional synthetic rate17 or in condensation biosynthesis,18 when the total flux Φ (tracer plus tracee) from precursor to product is constant and equal to its value before the tracer experiment, the variable required for calculating the kinetic parameters is the ratio of tracer mass to total (tracer plus tracee) mass: q(t)/[q(t)+Q(t)], which equals z(t)/[1+z(t)].
In apolipoprotein kinetics, it is generally assumed that the tracee steady state is not perturbed by the tracer experiment. Thus, the tracer-to-tracee ratio is used to express the measurements.2 13 19 20 21
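As a minimal numerical sketch of the two data expressions above (the enrichment values are hypothetical), the tracer-to-tracee ratio and the tracer-to-total ratio can be computed as follows:

```python
def tracer_to_tracee_ratio(e_t, e_i):
    """z(t) = e(t) / (e_i - e(t)), where e_t is the sample enrichment
    at time t and e_i the infusate enrichment (both as fractions)."""
    return e_t / (e_i - e_t)

def tracer_to_total_ratio(z):
    """Ratio of tracer mass to total (tracer plus tracee) mass,
    q/(q+Q), expressed in terms of the tracer-to-tracee ratio z = q/Q."""
    return z / (1.0 + z)

# Hypothetical enrichments: sample 0.02, infusate 0.99
z = tracer_to_tracee_ratio(0.02, 0.99)
```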
Principles of Compartmental Models
Definitions
A compartment defines a well-mixed and kinetically homogeneous amount of material. A compartment is a mathematical concept that does not necessarily correspond to a physiological space or a well-delimited physical volume. A compartmental system is a system made up of a finite number of compartments that interact by exchanging material. A compartmental model is a mathematical model whose equations describe the flux of material between a finite number of compartments.
Symbolic and Mathematical Representation of Compartmental Models
A compartment is generally represented by a circle or a box, and transfers of material are represented by arrows. Each arrow is labeled with the corresponding fractional transfer coefficient. Fractional transfer coefficients, denoted k(i,j) or L(i,j), express the fraction of compartment j transferred to compartment i at time t. For example, k(i,j)=0.3 hour−1 means that 30% of compartment j is transferred to compartment i per hour. The flux of tracee in mass per unit of time from compartment j to compartment i at time t is FLUX(i,j)=k(i,j)×Qj, where Qj is the tracee mass in compartment j at time t. The flux of tracer in mass per unit of time from compartment j to compartment i at time t is flux(i,j)=k(i,j)×qj, where qj is the tracer mass in compartment j at time t.
In compartmental systems used in tracer experiments, the state variables of the system are the amounts of material in each compartment, and the changes in such systems are usually represented by differential equations.7 Compartmental analysis is thus a geometric way of representing a system of differential equations. With modeling software, it is possible to solve the differential equations and to fit the model to the data by using a weighted least-squares approach.22 At the end of the fitting process, numerical values for the fractional transfer coefficients k(i,j) are obtained. Fluxes of the tracee can then be easily calculated, and a dynamic representation of the system under study is obtained.
A compartmental model is nonlinear if at least one fractional transfer coefficient is a function of the size of at least one compartment. A compartmental model is linear if all fractional transfer coefficients are either constants or functions of time only. In a steady-state tracer experiment, the compartmental model describing the experiment is linear with constant coefficients. The ability of a compartmental model to describe a system depends on some properties of the system. To be well approximated by the model, the system should be easily partitionable into amounts of material with exchanges between them, and the transfer rates between compartments should be negligible compared with the rates of mixing within the compartments.7
Compartmental Model Design in Stable-Isotope Experiments
Steady-State Experiments: Example of ApoB Kinetic Model Building
In this section, we assume that the tracer mass is not negligible with respect to the tracee mass, but we assume that the tracer is "ideal" in the sense that it is indistinguishable from the tracee and does not perturb the tracee constant steady state during the experiment. This means that the tracee masses Qi and the rate constants k(i,j) describing the tracer and the tracee system are constant. A test of the endogenous constant steady state has been proposed by Cobelli et al.14 If the tracee steady state has been perturbed, the tracee concentration C becomes a function of time C(t): C(t)=Ctot(t)/[1+z(t)], with Ctot(t) denoting the measured concentration of the substance of interest (tracer+tracee) in the accessible pool and z(t) the corresponding tracer-to-tracee ratio.
In stable-isotope experiments two sets of differential equations are needed to describe the model, because two state variables, q and Q, appear in the measurement equation z(t)=q(t)/Q(t).14 15 One set of differential equations describes the movement of tracer through the model, and the other set describes the movement of tracee through the model.
As an example, in the Figure, a three-compartment model based on information from Reference 23 is shown. This model has been chosen to illustrate the development of differential equations in stable-isotope experiments because of its extreme simplicity. It represents the minimal model using stable isotopes for studying the metabolism of apoB-containing lipoproteins in humans. Compartments 2, 3, and 4 represent VLDL, IDL, and LDL, respectively. Compartment 1 represents the precursor pool of VLDL apoB-100. In practice, it is often necessary to add a delay compartment (see the section entitled Delays below) between compartments 1 and 2 to take into account the assembly time of VLDL.
Example of model design in stable-isotope experiments. A three-compartment model has been associated with a tracer and a tracee experiment to provide the correct set of differential equations for a stable-isotope experiment. Compartment 1 represents a forcing function; compartments 2, 3, and 4 are observational compartments; s1, s2, and s3 are associated with sample enrichment, expressed as the tracer-to-tracee ratio and corresponding to compartments 2, 3, and 4, respectively; and s4 and s5 are associated with the tracee mass of compartments 2 and 3, respectively. Pool size of compartment 4 was assumed to be known exactly and was fixed to the measured value.
Once a model is built, it is necessary to describe the experiment; ie, to indicate in which compartment(s) the tracer is introduced and in which compartment(s) the amounts of tracer and tracee are estimated. In the Figure, we show that in stable-isotope compartmental modeling it is necessary to describe separately the fate of the tracer and the tracee through the model to provide a correct set of differential equations. That is why two separate "submodels" are represented in the Figure to describe the experiment: one concerns the tracer and the other concerns the tracee. In SAAM, a submodel used to describe the experiment is called an "experiment," and it is possible to build both tracer and tracee experiments. When one creates a tracer or a tracee experiment in the model, one has to specify the inputs (tracer introduction) and the samples (enrichment measurements and compartment masses). From the structure of the model and information about the input, the software internally constructs the system of differential equations represented by the model and the input. In our example, assuming a catenary chain with direct removal from each lipoprotein compartment, the differential equations describing the tracer movement through the model are as follows: dq2(t)/dt=k(2,1)q1.FF(t)−[k(3,2)+k(0,2)]q2(t), dq3(t)/dt=k(3,2)q2(t)−[k(4,3)+k(0,3)]q3(t), and dq4(t)/dt=k(4,3)q3(t)−k(0,4)q4(t).
The tracer is introduced into compartment 2 via a forcing function q1.FF (see the section entitled Forcing Functions below) that represents the VLDL apoB-100 precursor pool tracer enrichment. The value of q1(t), the amount of tracer in compartment 1 at time t, is replaced by a function q1.FF(t) to "drive" the appearance of tracer in the model. Assuming a catenary chain with direct removal from each lipoprotein compartment, the differential equations describing the tracee movement through the model are as follows: dQ1(t)/dt=U(1)−k(2,1)Q1(t), dQ2(t)/dt=k(2,1)Q1(t)−[k(3,2)+k(0,2)]Q2(t), dQ3(t)/dt=k(3,2)Q2(t)−[k(4,3)+k(0,3)]Q3(t), and dQ4(t)/dt=k(4,3)Q3(t)−k(0,4)Q4(t).
Q2, Q3, and Q4 are measured from samples, and U(1) is the endogenous input of tracee into compartment 1. In our example, Q4 (LDL apoB mass) was fixed, and Q2 (VLDL apoB mass) and Q3 (IDL apoB mass) were added to the analysis as weighted data.24 Indeed, we considered that LDL apoB mass was accurately known, whereas a nonnegligible error was associated with the IDL and VLDL masses, so values for these masses are calculated by the software during the fitting process. Associations of compartment masses with the data are represented in the Figure by bullets labeled si (sample number i).
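The tracer equations for a model of this kind can be simulated numerically. The sketch below assumes a catenary VLDL→IDL→LDL chain with direct removal from each lipoprotein compartment; all rate constants and the forcing function are hypothetical, so this illustrates the mechanics rather than reproducing the model of Reference 23.

```python
import numpy as np

# Hypothetical fractional transfer coefficients (per hour); in a real
# study these would be estimated by fitting the model to the data.
k21 = 1.0             # precursor (1) -> VLDL (2)
k32, k02 = 0.4, 0.2   # VLDL (2) -> IDL (3); VLDL -> out
k43, k03 = 0.3, 0.1   # IDL (3) -> LDL (4); IDL -> out
k04 = 0.05            # LDL (4) -> out

def forcing(t):
    """Hypothetical forcing function q1.FF(t): precursor-pool tracer
    rising to a plateau, as during a primed constant infusion."""
    return 1.0 - np.exp(-3.0 * t)

# Simple Euler integration of the tracer differential equations
dt, T = 0.001, 24.0
q2 = q3 = q4 = 0.0
for i in range(int(T / dt)):
    t = i * dt
    dq2 = k21 * forcing(t) - (k32 + k02) * q2
    dq3 = k32 * q2 - (k43 + k03) * q3
    dq4 = k43 * q3 - k04 * q4
    q2 += dq2 * dt
    q3 += dq3 * dt
    q4 += dq4 * dt
```

After 24 hours the fast VLDL and IDL compartments have reached their plateaus, while the slow LDL compartment is still rising, which is why LDL enrichment curves lag behind VLDL curves in such studies.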
The link between the two experiments, tracer and tracee, comes from the data, which are expressed as the tracer-to-tracee ratio. In the tracer experiment, we have indicated that the tracer enrichment has been measured in VLDL (compartment 2, sample 1), IDL (compartment 3, sample 2), and LDL (compartment 4, sample 3). We have also indicated that the data are expressed as the tracer-to-tracee ratio as follows: s1(t)=q2(t)/Q2(t), s2(t)=q3(t)/Q3(t), and s3(t)=q4(t)/Q4(t).
In the steady state, the fractional synthetic rate (FSR) equals the fractional catabolic rate (FCR).19 In our example of apolipoprotein kinetics, wherein a pool of apolipoproteins is represented by a single compartment, the FCR is the sum of transfer coefficients emerging from this compartment. When the pool of apolipoproteins is represented by more than one compartment, the FCR is the weighted sum, based on the mass of each compartment within that fraction, of the individual rate constants leaving that fraction.20
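The mass-weighted FCR described above can be sketched numerically; the masses and rate constants below are hypothetical.

```python
def fraction_fcr(masses, rate_constants):
    """Mass-weighted FCR for a lipoprotein fraction represented by
    several compartments: sum(M_i * k_i) / sum(M_i), where k_i is the
    sum of the rate constants leaving compartment i."""
    total = sum(masses)
    return sum(m * k for m, k in zip(masses, rate_constants)) / total

# Hypothetical fraction made of two compartments of 30 and 10 mg,
# with total outflow rate constants 0.5 and 0.1 per hour
fcr = fraction_fcr([30.0, 10.0], [0.5, 0.1])
```

The weighting matters: a simple average of the rate constants (0.3 per hour) would misstate the turnover of the fraction, because most of its mass resides in the fast compartment.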
Non–Steady State Experiments: Example of LDL Apheresis and Intravenous Infusion of Triglyceride Emulsion
In the nonsteady state, the tracee masses Qi as well as the rate constants k(i,j) are time-varying quantities. One possible way of describing non–steady state experiments involving stable-isotope tracers with a compartmental model is to combine two tracer experiments. One describes movement of the tracer through the model and the other describes the movement of the tracee through the model.
Recently, Parhofer et al25 studied the effects of LDL apheresis on the metabolic parameters of apoB in a kinetic study based on a bolus injection of trideuterated leucine. To calculate non–steady state metabolic parameters, a multicompartmental model was used. In practice, the non–steady state condition was modeled with the use of saam II software as follows. Two tracer experiments were performed on the model: one described the movement of tracer through the model and the other described the movement of apoB through the model. The link between the two experiments originated from the data that were expressed as tracer-to-tracee ratios. The abrupt reduction in apoB mass in the LDL fraction after apheresis was modeled by using a change condition function in saam II.
A similar approach was adopted by Björkegren et al26 to describe the effects of an infusion of triglyceride emulsion on VLDL apoB-100 kinetic. In this study, three subjects underwent a simultaneous stable-isotope kinetic study and triglyceride emulsion infusion. [D3]Leucine was infused for 10 hours, and after 6 hours of [D3]leucine infusion, Intralipid was infused for 4 hours. Infusion of the emulsion caused a perturbation of the steady state. The model for VLDL1 and VLDL2 apoB-100 turnover consisted of two linked parallel systems: one explained the behavior of the tracer and the other described the behavior of tracee (apoB mass in VLDL1 and VLDL2). An instantaneous change in the values of the rate constants was obtained by using the time-interrupt system in saam software.
The two previous examples are simple situations in which no nonlinear fractional rate constants were required to fit the data. However, this is generally not the case in non–steady state experiments, and the estimation of nonlinearity is a difficult task. Therefore, whenever possible, experiments should be performed under steady-state conditions.
Choices to Be Made When Developing the Compartmental Model
Model Development
Model Structure: Compartmentalization
The compartmental model should provide a plausible description of the system being studied. At this step it is imperative to carefully examine the accumulated knowledge in the field to justify the use of a compartmental model and, further, the choice of a given model. As pointed out by Foster and Barrett,27 work done over the years with radioactive isotopes must be taken into account when designing a stable-isotope experiment.
In simple cases, an estimation of the number of compartments can be obtained by fitting the curve to a sum of exponentials.7 Otherwise, model development may start with the simplest model based on the study design assumptions. The complexity of the model should be progressively increased by including more known physiological details until the optimal complexity is obtained.28 Statistical tests to compare models are indicated below.
A Priori Identifiability of the Model
A priori identifiability tests whether a mathematical model can provide unique solutions for unknown parameters from data collected in an experiment under the ideal conditions of noise-free observations and an error-free model structure. A model is a priori structurally identifiable if all of its parameters are uniquely identifiable and is nonidentifiable if at least one of its parameters is nonidentifiable. As pointed out by Cobelli and DiStefano,29 identifiability of all parameters does not generally imply a unique model. In this case the model is said to be identifiable but not uniquely so. For example, in Reference 29 it is shown that two distinct three-compartment models with the same number of parameters can fit a set of data equally well. When the model is a priori nonidentifiable, several strategies are available: (1) the model structure can be reconsidered, (2) constraints can be added,21 and (3) the experimental design can be modified to provide more information. Unidentifiability can arise with very simple two- or three-compartment models, so identifiability is truly a critical aspect of model development. Testing methods for identifiability have been reviewed and compared by Cobelli and DiStefano in Reference 29. A practical example can be obtained from Reference 21. Software is being developed to test a priori identifiability of linear compartmental models.30 A posteriori identifiability is related to both the statistical precision of parameter estimation and the goodness of fit, to be discussed in a later section of the current report.
Forcing Functions
Very often, the biologist or clinician is interested only in obtaining a given set of parameters from his or her study. In this case it is not necessary to build a very complex model that fully describes the behavior of the tracer. For example, if one studies apolipoprotein kinetics by using endogenous labeling with labeled leucine, it is not necessary to develop a complex model to describe whole-body leucine metabolism. Plasma leucine enrichment can be fitted by a function called a forcing function, and this function can be used directly as input in the model. Consequently, use of the forcing function takes into account the “recycling” of the tracer and minimizes its effects on the slow-turnover compartments.
Forcing functions are used to decouple a complex system. This is accomplished by forcing the contents of a specific compartment to equal a known function. The design of the forcing function is of great importance in model building. In complex model development, forcing functions can also be used to subdivide and fit different portions of the model.6 28 Fisher et al31 demonstrated in an apoB kinetic study following a bolus injection of labeled leucine that plasma leucine is not always a suitable forcing function in the model because the labeling of intracellular and extracellular leucine is not always similar.
Delays
Some physiological processes are not instantaneous and occur only after a time delay. For example, one VLDL particle is assembled in ≈30 minutes. If the investigator wants to determine the FSR of VLDL apoB, this time delay should be taken into account; it has been shown that neglecting time delays can cause significant errors in estimating the FSR.19 In SAAM software, delays are constructed internally by using two or more compartments. Increasing the number of compartments within the delay increases the resolution of the delay but also increases the computation time.
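The delay construction can be sketched numerically: a chain of n compartments, each with rate constant n/τ, has a mean transit time of τ, and the transit-time distribution sharpens as n grows. This is an illustration of the principle, not the actual SAAM implementation.

```python
import numpy as np

def delay_output(n, tau, dt=0.001, T=10.0):
    """Euler-integrate a unit bolus through a chain of n compartments,
    each with rate constant n/tau; return sample times and the outflow
    rate from the last compartment."""
    k = n / tau
    q = np.zeros(n)
    q[0] = 1.0                      # unit bolus into the first compartment
    times, out = [], []
    for i in range(int(T / dt)):
        out.append(k * q[-1])       # outflow before this Euler step
        dq = -k * q
        dq[1:] += k * q[:-1]        # transfer down the chain
        q += dq * dt
        times.append((i + 1) * dt)
    return np.array(times), np.array(out)

t, f = delay_output(n=5, tau=1.0)
mean_transit = np.sum(t * f) / np.sum(f)
```

With n=1 the "delay" is just an exponential washout; with larger n the outflow becomes a progressively narrower pulse centered on τ, which is why SAAM builds delays from several compartments.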
Experiment Design
Experiment Planning
Whenever possible, each experiment should be improved in the light of previous ones; thus, on average, a smaller number of experiments will be required if they are performed sequentially rather than simultaneously.32 The experiments should be planned to facilitate estimation of the parameters. For example, to estimate a slow component of the model, it is necessary to perform a long-term experiment.5 33 Conversely, to study a molecule with a fast turnover, short-term experiments with frequent sampling are required.
It should be pointed out that in long-term experiments with endogenous labeling, tracer recycling can significantly affect the shape of kinetic curves, especially for pools that turn over slowly.5 Tracer recycling may be the major drawback of long-term experiments, although models that include a recycling loop have been reported.31 However, this kind of model must be used with great care. Indeed, it is very difficult to accurately estimate the true contribution of recycling. This problem is especially crucial in low-turnover protein studies, like those for apoA-I or apoA-II. On one hand, a long-term study of these proteins might be compromised because of tracer recycling; on the other, a short-term experiment would not exactly reflect the metabolism of these proteins. To our knowledge, a satisfactory solution does not exist. So in both cases, the results should be examined with great care.
Sampling Protocol
Selection of the sample times can have a significant effect on the precision of the parameter estimates. An optimal sampling schedule is one in which the maximum precision of the model parameter estimates is achieved.34 35 36 It also allows the investigator to optimize the cost of the experiment and to save time by reducing the number of samples. Optimal sampling schedules are not suitable for testing model adequacy or for discriminating high-order models, because the number of sample times is too low. Therefore, it is important to keep in mind that optimal sampling schedules are applicable in vivo only to models that have been validated in terms of both structure and measurement error description. Software programs to compute optimal designs have been reported by Cobelli et al.37 When an optimal sampling schedule cannot be used, an alternative method for choosing data sampling times is simulation. A set of data can be obtained by adding random noise to the points calculated by the model; after removing some data points, the data set can then be fitted to test the effect of sampling times on parameter accuracy and precision. In practice, data points are especially useful in regions of the curve where changes in slope are important.
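The simulation approach can be sketched for a monoexponential decay: generate noisy data from the model at two candidate sampling schedules and compare the recovered rate constant. All values below (true rate constant, noise level, schedules) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
k_true = 0.5                        # hypothetical rate constant (per hour)

def simulate_and_fit(times, cv=0.05):
    """Add proportional noise to model predictions at the given sample
    times, then recover k by a log-linear least-squares fit."""
    y = np.exp(-k_true * times) * (1.0 + cv * rng.standard_normal(times.size))
    slope, _ = np.polyfit(times, np.log(y), 1)
    return -slope

dense = np.linspace(0.5, 8.0, 16)          # 16 samples over 8 hours
sparse = np.array([1.0, 4.0, 8.0])         # minimal 3-sample schedule
k_dense = simulate_and_fit(dense)
k_sparse = simulate_and_fit(sparse)
```

Repeating the simulation many times and comparing the spread of the recovered estimates shows how much precision each schedule buys, before any blood is drawn.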
Data Weighting
Models are fitted to the data by using a weighted least-squares approach, so that both the value and the weight of the data are taken into account in the calculations. In SAAM modeling software, the weights are estimated from the data error, with the data error variance approximated for use with real data38 by v(ti)=α+β[yobs(ti)]^γ, in which yobs(ti) represents a data point at time ti and α, β, and γ are parameters that can be estimated from the data. The weight wi of datum i is then given by the reciprocal variance, wi=1/v(ti).
Data can be weighted by using absolute weights or relative weights. When absolute weights are used, v̂ is assumed to be equal to 1; when relative weights are used, v̂ is unknown and is estimated for each data set. When at least two data sets are used, relative data weighting allows automated optimization of the weight of each data set. Data weighting has a great impact on the model-fitting process; thus, when the error structure is not known exactly, the use of relative weights is recommended.
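A numerical sketch of this error model and weighting scheme (the α, β, and γ values are hypothetical; γ=2 corresponds to a constant coefficient of variation):

```python
import numpy as np

y_obs = np.array([10.0, 5.0, 2.0, 1.0])   # observed data points
alpha, beta, gamma = 0.0, 0.01, 2.0        # hypothetical error-model parameters

# Variance model v(t_i) = alpha + beta * y_obs(t_i)**gamma
v = alpha + beta * y_obs**gamma
w = 1.0 / v                                # weight = reciprocal variance
```

Small measurements get large weights here, reflecting their smaller absolute error; a poor choice of α, β, and γ shifts the fit toward the wrong part of the curve.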
Data Scaling
The scaling of the data is very important because computers keep track of only a certain number of significant digits. Severe rounding-off errors can thus be minimized by data scaling that avoids very large or very small numbers. In stable-isotope kinetics, data can be expressed as the tracer-to-tracee ratio in percent; in this case, the correct expression for sample si is si=100×qi/Qi. Similarly, for complex models in which the amount of tracer differs markedly between compartments, the model can be subdivided and the different portions fitted separately (see the section on Forcing Functions).
Initial Estimates of the Parameters
The best fit of the data and the correct parameter values are obtained when the weighted residual sum of squares (WRSS) between the observed and calculated values reaches a global minimum: WRSS=Σi=1..n wi[yobs(ti)−y(ti)]², where yobs(ti) and y(ti) are the observed and fitted values, respectively, at time ti; n is the number of data points; and wi is the weight assigned to yobs at time ti.
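The objective function can be sketched directly (the data and weights below are hypothetical):

```python
import numpy as np

def wrss(y_obs, y_fit, w):
    """Weighted residual sum of squares between observed and fitted values."""
    r = np.asarray(y_obs) - np.asarray(y_fit)
    return float(np.sum(np.asarray(w) * r**2))

# Hypothetical observed values, fitted values, and unit weights
value = wrss([10.0, 5.0, 2.0], [9.5, 5.2, 2.1], [1.0, 1.0, 1.0])
```

Fitting software varies the parameters of the model, recomputes y_fit, and searches for the parameter set that minimizes this quantity.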
When an investigator fits a model to experimental data, the software adjusts the parameter values within high and low limits specified by the user until the best fit between the data and the associated calculated values is obtained. A poor selection of initial values can cause the search to drift indefinitely without finding a minimum or to converge on the wrong solution, ie, to become "trapped" in a local minimum.22 These problems can also arise from an improper choice of weights or from poor numerical identifiability. In any case, poorly selected initial values increase the computer time needed to reach the solution. Before fitting the data, a reasonable fit should therefore be obtained by adjusting the values of the parameters manually. There are no general guidelines, but whenever possible, initial estimates of the parameters should be based on previous experience or previous knowledge in the field.
Validation of the Completed Model
Validation assesses whether or not the postulated model is adequate for its intended purpose. An extensive discussion of the validity and validation of simple and complex models can be obtained from Reference 39. We have focused here on the numerical and statistical criteria involved in the assessment of models wherein formal identification techniques can be adopted.
Validation of Simple Models
Assessing the Goodness of Fit
The scatter of observed data points about the theoretical curve should be randomly distributed, and the residuals should not be systematically related to the x values. This can be tested by using the runs test.40 22 A run is a series of consecutive points with residuals of the same sign, positive or negative. The expected number of runs is E(R)=[2n+n−/(n++n−)]+1, where n+ is the number of data points above the fitted curve and n− the number below the curve. The variance of the number of runs is Var(R)=2n+n−(2n+n−−n+−n−)/[(n++n−)²(n++n−−1)]. When both n+ and n− are >10, the quantity z=[R−E(R)]/√Var(R), where R is the actual number of runs, is approximately distributed as N(0,1). For a significance level of α=5%, the z value should fall within the interval [−1.96, 1.96]. As pointed out by Bard,32 failure to pass the runs test is no reason for outright rejection of the model; in particular, when the data are very accurate, neglected effects outweigh random errors in measurements.
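A sketch of the runs test, using the Wald–Wolfowitz formulas for the expected number of runs and its variance (the toy residuals below are too few for the normal approximation to be formally valid, but they illustrate the computation):

```python
import math

def runs_test_z(residuals):
    """Runs test on the signs of the residuals. Returns (R, z): the
    observed number of runs and the approximate N(0,1) statistic
    (the approximation holds when n+ and n- are both > 10)."""
    signs = [r > 0 for r in residuals if r != 0]
    n_pos = sum(signs)
    n_neg = len(signs) - n_pos
    n = n_pos + n_neg
    runs = 1 + sum(1 for a, b in zip(signs, signs[1:]) if a != b)
    expected = 2.0 * n_pos * n_neg / n + 1.0
    variance = (2.0 * n_pos * n_neg * (2.0 * n_pos * n_neg - n)) / (n**2 * (n - 1))
    return runs, (runs - expected) / math.sqrt(variance)

# Perfectly alternating residuals: the maximum possible number of runs
R, z = runs_test_z([1, -1, 1, -1, 1, -1, 1, -1, 1, -1, 1, -1])
```

A large positive z (too many runs) flags over-regular alternation, while a large negative z (too few runs) flags the systematic drift above or below the curve that indicates a structural misfit.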
Estimation of the Parameters
Once a good fit of the data is obtained, the values and the statistical uncertainties of the model parameters should be carefully inspected. When a large fractional SD is associated with some parameters, either more experimental observations should be added or some compartments should be removed.41 The values of the parameters should be significantly different from zero. To build a model of β-carotene metabolism, Novotny et al42 tested parameter values against zero by using a single-tailed, one-sample Student's t statistic; in this study, compartments associated with statistically nonsignificant transfer coefficients were merged into other compartments. The statistical relationships among parameters should also be checked by examining the correlation matrix. The occurrence of large correlation coefficients (>0.85) for certain combinations of parameters (multicollinearity) indicates that various combinations of parameter values will fit the data equally well.43 If the parameter values are well determined and compatible with the accumulated knowledge in the field and if the residuals are acceptable, then the model may be consistent.
Comparison of Models
More than one model can provide a good fit of the data and a correct parameter estimation. If the models have the same number of adjustable parameters, then the model that provides the lowest WRSS is superior. Comparing models with different numbers of adjustable parameters is less straightforward, because increasing the number of parameters reduces the WRSS but may at the same time unnecessarily complicate the model. To test whether or not the WRSS has been sufficiently reduced to justify the choice of the model with additional parameters, the F test may be used.38 The F ratio is defined by

F = [(WRSS1 − WRSS2)/(P2 − P1)] / [WRSS2/(N − P2)]

where the subscript 1 refers to the simpler model with fewer parameters, P represents the number of adjustable parameters, and N is the number of data points. A P value is obtained from the F value by consulting a table with (P2 − P1) and (N − P2) degrees of freedom.
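The F ratio defined above is a direct computation; the function name and example values below are illustrative.

```python
def f_ratio(wrss1, p1, wrss2, p2, n):
    """F ratio comparing a simple model (wrss1, p1 parameters) with a
    more complex one (wrss2, p2 parameters) fitted to n data points.

    The result is compared against an F table with (p2 - p1) and
    (n - p2) degrees of freedom to obtain a P value.
    """
    return ((wrss1 - wrss2) / (p2 - p1)) / (wrss2 / (n - p2))

# Example: adding two parameters halves the WRSS on 30 data points.
f = f_ratio(40.0, 4, 20.0, 6, 30)  # → 12.0
```

A large F value indicates that the reduction in WRSS is unlikely to be due to chance alone, favoring the model with additional parameters.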
The Akaike information criterion (AIC)38,44 and the Schwarz criterion (SC)45 are also commonly used to compare two or more models. The model with the smallest criterion is the best. The formulas used to calculate the criteria depend on the data weighting, where P represents the number of adjustable parameters and N the number of data points.

Absolute weights:

AIC = WRSS + 2P
SC = WRSS + P ln(N)

Relative weights:

AIC = N ln(WRSS) + 2P
SC = N ln(WRSS) + P ln(N)

If the fit of the model provides residuals that are randomly distributed and if increasing the number of parameters does not significantly reduce the WRSS, then the model may be consistent but not unique, because more than one model can fit the data equally well.29
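The two pairs of formulas above can be collected into one helper; the function name and argument layout are ours.

```python
import math

def aic_sc(wrss, p, n, relative_weights=True):
    """Akaike information criterion and Schwarz criterion for one model fit.

    With relative weights the criteria use N*ln(WRSS); with absolute
    weights the WRSS enters directly. p is the number of adjustable
    parameters and n the number of data points.
    """
    if relative_weights:
        aic = n * math.log(wrss) + 2 * p
        sc = n * math.log(wrss) + p * math.log(n)
    else:
        aic = wrss + 2 * p
        sc = wrss + p * math.log(n)
    return aic, sc

# Example with absolute weights: WRSS = 20.0, 6 parameters, 30 points.
aic, sc = aic_sc(20.0, 6, 30, relative_weights=False)  # aic → 32.0
```

Compute the pair for each candidate model under the same weighting scheme and retain the model with the smaller criterion values.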
Validation of Complex Models
Complex models are those in which not all of the unknown parameters can be estimated by formal identification techniques.39 Such models are generally of high order, with only a small number of variables accessible to direct measurement. An approach to validating complex models is described by Cobelli et al.39
Validity of the Model
Validity of the model may be assessed by both internal and external criteria.39 Internal criteria are mathematical criteria. The model must contain no conceptual errors, and the algorithm for simulation or fitting should lead to accurate solutions with acceptable round-off errors. External criteria refer to purpose, theory, and data.39 Theoretical rules, such as mass conservation, which correspond to physical and chemical laws, should be respected in the model. The model should be consistent with the experimental data available and provide an accurate estimation of the parameters. Last, it should provide useful information for the practical situation that interests the biologist or clinician.
Conclusions
Experimental design and mathematical processing of the data are fundamental to achieving a quantitative understanding of lipid metabolism dynamics. Compartmental modeling and stable-isotope tracers are powerful tools for the in vivo investigation of lipid and lipoprotein metabolism in humans. In lipid research, very important work has been done over the years by using both radioactive and stable isotopes, and many characteristics of apolipoprotein and cholesterol kinetic models are now well established. Hence, when an experiment is designed, it should take advantage of the accumulated knowledge in this field.

The biologist or clinician who performs a kinetic experiment should always bear in mind that the final step of data analysis is mathematical modeling and should thus consider the kinetic model an integral component of the experimental design. Before starting any experiment, the investigator must determine whether the experiment is properly designed to provide an accurate estimation of the kinetic model parameters; solving this problem requires knowing the basic mechanisms of a compartmental model. Very often, the data can be described by more than one model. To be sure that a given model is the best choice, it is necessary to carefully examine the fitting of the kinetic curve, to know the precision of the parameter estimates, and to perform statistical tests to compare the results obtained with different models. In this review, we have reported solutions to methodological difficulties and have provided references for the more advanced modeling tools.
Acknowledgments
This investigation was supported by the Université de Bourgogne, the Conseil Régional de Bourgogne FP, the Institut National de la Santé et de la Recherche Médicale (INSERM), and Parke Davis France FP. We thank Dr C. Lallemant for useful suggestions during the development of the manuscript.
References
- Kissebah AH, Alfarsi S, Evans DJ, Adams PW. Integrated regulation of very low density lipoprotein triglyceride and apolipoprotein-B kinetics in non-insulin-dependent diabetes mellitus. Diabetes. 1982;31:217–225.
- SAAM II User Guide. Seattle, Wash: University of Washington, SAAM Institute; 1997.
- Jacquez JA. Compartmental Analysis in Biology and Medicine. Amsterdam, Netherlands: Elsevier; 1972.
- Cobelli C, Toffolo G, Foster DM. Tracer-to-tracee ratio for analysis of stable isotope tracer data: link with radioactive kinetic formalism. Am J Physiol. 1992;262:E968–E975.
- Cobelli C, Toffolo G, Bier DM, Nosadini R. Models to interpret kinetic data in stable isotope tracer studies. Am J Physiol. 1987;253:E551–E564.
- Toffolo G, Foster DM, Cobelli C. Estimation of protein fractional synthetic rate from tracer data. Am J Physiol. 1993;264:E128–E135.
- Kelleher JK, Masterson TM. Model equations for condensation biosynthesis using stable isotopes and radioisotopes. Am J Physiol. 1992;262:E118–E125.
- Demant T, Packard CJ, Demmelmair H, Stewart P, Bedynek A, Bedford D, Seidel D, Shepherd J. Sensitive methods to study human apolipoprotein B metabolism using stable isotope-labeled amino acids. Am J Physiol. 1996;270:E1022–E1036.
- Welty FK, Lichtenstein AH, Barrett PHR, Dolnikowski GG, Ordovas JM, Schaefer EJ. Decreased production and increased catabolism of apolipoprotein B-100 in apolipoprotein B-67/B-100 heterozygotes. Arterioscler Thromb Vasc Biol. 1997;17:881–888.
- Björkegren J, Packard CJ, Hamsten A, Bedford D, Caslake M, Foster L, Shepherd J, Stewart P, Karpe F. Accumulation of large very low density lipoprotein in plasma during intravenous infusion of a chylomicron-like triglyceride emulsion reflects competition for a common lipolytic pathway. J Lipid Res. 1996;37:76–86.
- Cobelli C, DiStefano JJ. Parameter and structural identifiability concepts and ambiguities: a critical review and analysis. Am J Physiol. 1980;239:R7–R24.
- Saccomani MP, Audoly S, D’Angio L, Sattier R, Cobelli C. PRIDE: a program to test a priori identifiability of linear compartmental models. In: Blanke M, Soderstrom T, eds. Proceedings of SYSID 1994, 10th IFAC Symposium on System Identification. Copenhagen, Denmark: Danish Automation Society; 1994;3:13–18.
- Bard Y. Nonlinear Parameter Estimation. New York, NY: Academic Press; 1974.
- Goodman DWS, Noble RP, Dell RB. Three-pool model of the long term turnover of plasma cholesterol in man. J Lipid Res. 1973;14:178–188.
- Cobelli C, Ruggeri A. Optimal design of sampling schedules for studying glucose kinetics with tracers. Am J Physiol. 1989;257:E444–E450.
- Landaw EM, DiStefano JJ. Multiexponential, multicompartmental, and noncompartmental modeling, II: data analysis and statistical considerations. Am J Physiol. 1984;246:R665–R677.
- Sokal RR, Rohlf FJ. Biometry, the Principles and Practice of Statistics in Biological Research. 3rd ed. New York, NY: WH Freeman Co; 1995:797–803.
- Novotny JA, Zech LA, Furr HC, Dueker SR, Clifford AJ. Mathematical modeling in nutrition: constructing a physiologic compartmental model of the dynamics of β-carotene metabolism. Adv Nutr Res. 1996;40:25–54.
Development of Compartmental Models in Stable-Isotope Experiments. Frédéric Pont, Laurence Duvillard, Bruno Vergès, and Philippe Gambert. Arteriosclerosis, Thrombosis, and Vascular Biology. 1998;18:853–860. Originally published June 1, 1998. https://doi.org/10.1161/01.ATV.18.6.853