
Optimal experimental design

Optimal experimental design is a branch of statistics that focuses on the selection of experimental conditions—such as the number of trials, factor levels, and allocation of resources—to maximize the precision and reliability of inferences drawn from the data, often by minimizing the variance in parameter estimates or predictions under an assumed model. This methodology constructs designs that are tailored to specific objectives, using optimization algorithms to evaluate trade-offs between cost, feasibility, and statistical efficiency.

The foundations of optimal experimental design trace back to the early 20th century, with Danish statistician Kirstine Smith introducing the concept in her 1918 paper, where she derived designs that minimize the variance of estimated coefficients in polynomial regression models. This work laid the groundwork for more systematic approaches, later advanced by Ronald A. Fisher in his 1935 book The Design of Experiments, which emphasized randomization and blocking but also influenced optimal criteria through its focus on efficient estimation. Subsequent developments in the mid-20th century formalized the use of information matrices to quantify design efficiency, while computational advances since the 1990s have enabled practical implementation via algorithms like coordinate exchange and genetic optimization.

Central to optimal experimental design are various optimality criteria, each targeting different aspects of statistical efficiency based on the Fisher information matrix, which measures the amount of information the data provide about model parameters. A-optimality minimizes the average variance of the parameter estimates by reducing the trace of the inverse information matrix, making it suitable for overall precision in regression settings. D-optimality minimizes the determinant of the inverse information matrix (or equivalently, maximizes the determinant of the information matrix), which shrinks the volume of the confidence ellipsoid for the parameters and is widely used for its balance of efficiency and computational tractability. E-optimality maximizes the smallest eigenvalue of the information matrix, thereby minimizing the maximum variance among linear combinations of parameters and enhancing robustness against worst-case estimation errors. Other criteria, such as I-optimality for average prediction variance and G-optimality for maximum prediction variance, extend these principles to specific inferential goals like model validation or forecasting.

Optimal experimental designs find broad applications across disciplines, particularly in industrial engineering for process optimization and quality improvement, where they reduce the number of required experiments while maintaining high precision. In clinical research, they support dose-response studies and drug development programs by efficiently estimating treatment effects under resource constraints. Emerging uses include machine learning, environmental modeling, and behavioral experiments, where adaptive and Bayesian variants allow real-time adjustments to evolving data. Despite their advantages, challenges remain in handling model misspecification and nonlinear systems, often addressed through robust or sequential design strategies.

Fundamentals

Definition and principles

Optimal experimental design refers to the systematic selection of experimental conditions, or design points, to maximize the precision of estimates in a statistical model while adhering to practical constraints, such as a limited number of observations or resource availability. This approach formalizes the planning of experiments as an optimization problem, where the goal is to choose inputs that optimize a functional of the information matrix derived from the model, ensuring efficient use of experimental resources. In seminal works, this is framed as a convex optimization problem over the space of probability measures on the experimental domain, allowing for both exact designs with discrete allocations and approximate designs that treat weights as continuous probabilities.

Central to optimal design is the assumption of an underlying statistical model with parameters \theta, where the observations provide information about \theta through the likelihood function. The Fisher information matrix, which quantifies the amount of information the data carry about \theta, plays a pivotal role in this optimization, as the design seeks to maximize its desirable properties under the given model. Key principles guiding this process include enhancing the efficiency of estimation by concentrating observations where they contribute most to parameter precision and maximizing the statistical power for tasks such as hypothesis testing. These principles ensure that the design not only yields accurate point estimates but also supports reliable inference and decision-making.

A foundational setup often involves linear regression models of the form \mathbf{y} = \mathbf{X} \boldsymbol{\beta} + \boldsymbol{\varepsilon}, where \mathbf{y} is the vector of observed responses, \mathbf{X} is the design matrix determined by the choice of experimental conditions, \boldsymbol{\beta} is the vector of unknown parameters (playing the role of \theta), and \boldsymbol{\varepsilon} represents random errors with mean zero and constant variance. The design influences \mathbf{X}, thereby affecting the covariance structure of the least-squares estimator for \boldsymbol{\beta}, and optimal selection aims to make this covariance as "small" as possible in a scalarized sense.

In notation, an approximate design \xi is a probability measure defined over the space of possible experimental conditions \mathbf{x}, typically supported on a finite set of discrete points \mathbf{x}_i with associated weights w_i \geq 0 summing to 1, such that \xi = \sum_i w_i \delta_{\mathbf{x}_i}, where \delta is the Dirac measure. This representation allows the information from the design to be summarized via the expected information matrix M(\xi) = \int f(\mathbf{x}) f(\mathbf{x})' \, d\xi(\mathbf{x}), with f(\mathbf{x}) denoting the regressor vector for the model at \mathbf{x}. For practical implementation, exact designs replicate points according to rounded weights, maintaining the focus on efficiency under fixed run sizes.
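To make the notation concrete, the following minimal Python sketch (NumPy-based; the quadratic regressor f(x) = (1, x, x^2) and the three-point design are illustrative choices, not prescribed above) assembles the information matrix M(\xi) = \sum_i w_i f(\mathbf{x}_i) f(\mathbf{x}_i)' of an approximate design:

```python
import numpy as np

def regressor(x):
    # Regressor vector f(x) for a quadratic model y = b0 + b1*x + b2*x^2.
    return np.array([1.0, x, x * x])

def information_matrix(points, weights):
    # M(xi) = sum_i w_i f(x_i) f(x_i)^T for an approximate design xi.
    M = np.zeros((3, 3))
    for x, w in zip(points, weights):
        f = regressor(x)
        M += w * np.outer(f, f)
    return M

# Classical D-optimal approximate design for the quadratic model on [-1, 1]:
# equal weight 1/3 at the points -1, 0, and 1.
M = information_matrix([-1.0, 0.0, 1.0], [1/3, 1/3, 1/3])
print(np.linalg.det(M))  # determinant used by the D-criterion
```

For this model the equal-weight design at \{-1, 0, 1\} is the classical D-optimal approximate design, so the resulting determinant serves as a benchmark against which exact designs can be compared.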

Advantages over traditional designs

Optimal experimental designs offer significant efficiency gains over traditional ad-hoc or uniform designs by minimizing the variance of parameter estimates for a fixed number of experimental runs, resulting in narrower confidence intervals and more precise inferences. In linear models, this is achieved by strategically allocating design points to maximize the information matrix, such as concentrating observations at the extremes of the factor range to maximize the spread in the predictor variable, which directly reduces the variance of the slope estimator compared to evenly spaced points that underutilize the design space. For instance, in simple linear regression, placing half the runs at each endpoint yields the lowest possible variance for the slope, outperforming uniform spacing by better leveraging the full range of the factor, as demonstrated in factorial design applications where coded levels at ±1 enhance estimation precision.

These designs also lead to substantial cost savings, particularly in resource-intensive settings like clinical trials or industrial tests, by requiring fewer runs to achieve equivalent precision levels. In clinical contexts, D-optimal designs accounting for dropouts can reduce the number of required time points and adjust sample allocations, yielding up to 19% cost reductions while maintaining statistical efficiency; for example, redesigning an Alzheimer's disease trial with optimized assessment days (42, 285, 356, 364) for a five-time-point setup achieves about 19% cost savings at the original total sample size of 144, while a four-time-point design (including days 42, 318, and 364) allows increasing the total sample size to 172 with adjusted arm allocations (e.g., 72 and 100 patients) under a fixed budget without compromising precision. This tailoring to practical constraints, such as costs and dropout rates, contrasts with rigid traditional designs that often overestimate required resources and inflate costs.

Furthermore, optimal designs enhance statistical power for hypothesis testing by focusing resources on informative regions of the design space, thereby increasing the ability to detect true effects. This stems from the maximization of the Fisher information matrix, which directly improves the sensitivity of tests compared to diffuse uniform designs that spread observations inefficiently. In regression settings, such as estimating treatment slopes in dose-response studies, this focused allocation leads to higher power for detecting parameter significance, with simulations showing D-optimal approaches outperforming standard designs by up to 19% in efficiency metrics.
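The endpoint-versus-uniform comparison can be verified directly. A minimal sketch (the run count n = 10 and unit error variance are illustrative assumptions):

```python
import numpy as np

def slope_variance(xs, sigma2=1.0):
    # Var(beta1_hat) for simple linear regression y = b0 + b1*x + eps,
    # with one observation at each x in xs: sigma^2 / sum((x - xbar)^2).
    xs = np.asarray(xs, dtype=float)
    return sigma2 / np.sum((xs - xs.mean()) ** 2)

n = 10
endpoint = [-1.0] * (n // 2) + [1.0] * (n // 2)   # half the runs at each end
uniform = np.linspace(-1.0, 1.0, n)               # evenly spaced runs

print(slope_variance(endpoint))  # 0.1 (minimal possible on [-1, 1])
print(slope_variance(uniform))   # ~0.245, over twice as large
```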

Statistical Theory

Minimizing estimator variance

In optimal experimental design, the selection of experimental conditions, encoded in the design measure ξ, aims to minimize the variance-covariance matrix of the parameter estimator \hat{θ} for an underlying statistical model parameterized by θ. This minimization enhances the precision of inferences about θ by allocating resources—such as trial locations, sample sizes, or replication counts—to maximize the information extracted from the data. The theoretical foundation rests on the Fisher information matrix I(ξ, θ), which quantifies the expected information about θ provided by observations under design ξ; optimal designs seek to choose ξ that yields a "large" I(ξ, θ) in an appropriate matrix sense, thereby reducing uncertainty.

For linear models of the form y = X β + ε, where X is the design matrix shaped by ξ, the best linear unbiased estimator (BLUE) \hat{β} has variance-covariance matrix σ² (X^T W X)^{-1}, with σ² denoting the error variance and W a known positive definite weight matrix reflecting heteroscedasticity or design replications. The design ξ influences this by determining the support points and weights in X, as well as the structure of W; for instance, in homoscedastic cases with W = I, the focus shifts to optimizing X^T X to shrink the generalized variances of individual components of \hat{β}. This formulation underscores how ξ controls the eigenvalues of the information matrix M(ξ) = X^T W X, directly impacting the scale and orientation of the confidence ellipsoid for β.

In nonlinear or generalized linear models, the maximum likelihood estimator \hat{θ} is asymptotically normal with variance-covariance matrix approximately [n I(ξ, θ)]^{-1}, where n is the total sample size and I(ξ, θ) is the average Fisher information per observation, given by I(ξ, θ) = ∫ f(x, θ)^T V(x, θ)^{-1} f(x, θ) ξ(dx), with f(x, θ) the derivative of the mean function and V(x, θ) the variance function. This asymptotic expression reveals that, for fixed n, the design ξ optimizes precision by maximizing I(ξ, θ) in a suitable sense, while scaling n inversely reduces variances proportionally across parameters.

Sensitivity functions further elucidate the role of individual design points in variance minimization, measuring the marginal benefit of perturbing ξ toward a specific point x on the overall criterion. Defined via the Gâteaux derivative of a functional of I(ξ, θ) with respect to infinitesimal changes at x, these functions—of the form ψ(x, ξ) = \operatorname{tr}\left(I(ξ, θ)^{-1} \frac{\partial I}{\partial \xi}(x)\right) or similar—quantify local influence; points where ψ(x, ξ) exceeds a threshold indicate opportunities for improvement by shifting mass to those locations. Evaluating such functions guides iterative refinement of ξ to achieve efficient variance control.
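For D-optimality in a linear model, the sensitivity function takes the explicit form d(x, ξ) = f(x)^T M(ξ)^{-1} f(x), and the Kiefer-Wolfowitz equivalence theorem states that its maximum over the design space equals the number of parameters p exactly when ξ is D-optimal. A minimal sketch for the straight-line model (an illustrative choice):

```python
import numpy as np

def regressor(x):
    # f(x) for the straight-line model y = b0 + b1*x.
    return np.array([1.0, x])

def info_matrix(points, weights):
    return sum(w * np.outer(regressor(x), regressor(x))
               for x, w in zip(points, weights))

def sensitivity(x, M_inv):
    # D-optimality sensitivity d(x, xi) = f(x)^T M(xi)^{-1} f(x).
    f = regressor(x)
    return f @ M_inv @ f

# The design with weight 1/2 at each endpoint of [-1, 1] is D-optimal here.
M_inv = np.linalg.inv(info_matrix([-1.0, 1.0], [0.5, 0.5]))
grid = np.linspace(-1.0, 1.0, 201)
d = [sensitivity(x, M_inv) for x in grid]
# Kiefer-Wolfowitz: max_x d(x, xi) equals the number of parameters p
# (here 2) for the D-optimal design, attained at the support points.
print(max(d))  # 2.0
```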

Optimality criteria

In optimal experimental design, optimality criteria are scalar functions applied to the Fisher information matrix I(\xi) to evaluate and select designs \xi that provide the most efficient parameter estimation. These criteria quantify desirable properties of the variance-covariance matrix of the parameter estimator \operatorname{Var}(\hat{\theta}) = I(\xi)^{-1}, assuming asymptotic normality under the model. Common criteria focus on minimizing aspects of this variance, such as its volume, average, or worst-case magnitude.

D-optimality maximizes the determinant of the information matrix, \det I(\xi), which is equivalent to minimizing \det \operatorname{Var}(\hat{\theta}). This criterion minimizes the volume of the confidence ellipsoid for the parameter \theta, providing a balanced reduction in uncertainty across all parameters. The concept was formalized for regression models by Kiefer in his seminal work on optimum designs.

A-optimality minimizes the trace of the inverse information matrix, \operatorname{trace} I(\xi)^{-1}, corresponding to the average variance of the individual parameter estimators. This approach prioritizes an overall reduction in the sum of variances, making it suitable when uniform precision across parameters is desired. It traces its roots to early efficiency considerations in experimental statistics and is extensively analyzed in comprehensive treatments of optimal design theory.

E-optimality maximizes the smallest eigenvalue of the information matrix, \lambda_{\min}(I(\xi)), thereby minimizing the largest variance among normalized parameter directions and enhancing worst-case precision. This criterion ensures robustness against the direction of highest uncertainty in the parameter space. It was introduced in the context of locally optimal designs for parameter estimation by Chernoff.

Other criteria include linear (c-)optimality, which minimizes c^T I(\xi)^{-1} c for a specific vector c, targeting the variance of a linear combination of parameters relevant to the inferential goal. Ratio criteria, such as D_s-optimality, maximize the determinant of the information matrix for a subset of s parameters of interest, adjusted for the remaining nuisance parameters, focusing precision where it matters while marginalizing the rest. These extensions allow tailored optimization beyond global measures.

All standard optimality criteria, including D-, A-, and E-optimality, are concave functions of the design measure \xi (after suitable transformation, such as \log \det for D-optimality) over the space of designs, ensuring that any local maximum is global and facilitating convex optimization techniques for design construction. This concavity property underpins the theoretical framework for verifying and computing optimal designs.
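A minimal sketch evaluating the three criteria on a given information matrix (the matrix itself is an arbitrary illustrative example):

```python
import numpy as np

def criteria(M):
    # Scalar optimality criteria evaluated on an information matrix M.
    M_inv = np.linalg.inv(M)
    return {
        "D": np.linalg.det(M),            # maximize: shrinks ellipsoid volume
        "A": np.trace(M_inv),             # minimize: average estimator variance
        "E": np.linalg.eigvalsh(M).min(), # maximize: worst-direction precision
    }

M = np.array([[1.0, 0.0],
              [0.0, 0.25]])
print(criteria(M))
# {'D': 0.25, 'A': 5.0, 'E': 0.25}
```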

Contrasts between criteria

Different optimality criteria in experimental design exhibit distinct trade-offs that influence their suitability for various scenarios. D-optimality seeks to minimize the generalized variance across all parameters by maximizing the determinant of the information matrix, providing a balanced approach for multiparameter estimation; however, it may lead to inflated variances for individual parameters if the design emphasizes overall precision at the expense of specific directions. In contrast, A-optimality minimizes the average variance by targeting the trace of the inverse information matrix, offering equitable precision across parameters but potentially overlooking correlations between them, which can result in suboptimal performance when parameter dependencies are strong. E-optimality, by maximizing the minimum eigenvalue of the information matrix, prioritizes protection against the weakest directions, enhancing stability in ill-conditioned models, though it may compromise overall information content by focusing narrowly on the worst-case variance.

Geometrically, these criteria correspond to different aspects of the confidence ellipsoid defined by the inverse information matrix. D-optimality minimizes the volume of this ellipsoid, ensuring compact joint uncertainty for all parameters. A-optimality reduces the sum (equivalently the average) of the squared semi-axis lengths, promoting uniform shrinkage across dimensions. E-optimality minimizes the length of the longest axis, thereby safeguarding against extreme uncertainties in sensitive directions.

Application scenarios highlight these differences: D-optimality is preferred for multiparameter models requiring global precision, such as regression with multiple predictors. E-optimality proves advantageous for stability in ill-conditioned designs, like those involving near-singular information matrices in dose-response studies. For hypothesis-specific contrasts, such as treatment effects in clinical trials, custom linear criteria (c-optimality) minimize the variance of targeted linear combinations, allowing tailored focus beyond standard alphabetic measures.

To address multi-objective needs, compromise criteria blend these properties; for instance, when primary interest lies in a subset of s parameters, D_s-optimality maximizes the determinant of the information matrix for that subset after accounting for the remaining nuisance parameters, balancing subset precision with overall model support. Such approaches, including weighted combinations like D^\alpha A^{1-\alpha}, enable flexible trade-offs between volume minimization and average variance reduction.
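These geometric readings can be computed from the eigenvalues of the inverse information matrix, which are proportional to the squared semi-axis lengths of the confidence ellipsoid. A minimal sketch with an arbitrary illustrative matrix:

```python
import numpy as np

# Eigenvalues of Var(theta_hat) = M^{-1} are proportional to the squared
# semi-axis lengths of the confidence ellipsoid.
M_inv = np.linalg.inv(np.array([[2.0, 0.5],
                                [0.5, 1.0]]))
lam = np.linalg.eigvalsh(M_inv)

volume_proxy = np.sqrt(np.prod(lam))  # D-criterion targets ellipsoid volume
avg_variance = np.mean(lam)           # A-criterion targets the average spread
worst_axis = np.max(lam)              # E-criterion targets the longest axis
print(volume_proxy, avg_variance, worst_axis)
```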

Design Construction

Exact and approximate designs

In optimal experimental design, approximate designs are represented as probability measures \xi on the design space, where \xi assigns non-negative weights summing to 1 to a finite set of support points, allowing continuous optimization of design criteria such as D-optimality. These designs are optimized by solving convex optimization problems, often leveraging equivalence theorems that characterize optimality conditions, such as the Kiefer-Wolfowitz equivalence theorem, which equates the maximization of the determinant of the information matrix to the minimization of the maximum prediction variance over the design space. The resulting \xi provides a theoretical benchmark, as it relaxes the constraints of finite sample sizes and permits fractional allocations that may not be directly implementable.

Exact designs, in contrast, consist of discrete allocations of a fixed number n of experimental runs to specific points, with integer weights ensuring feasibility in practice. These are typically constructed by starting from an optimal approximate design \xi and rounding its weights to the nearest integers that sum to n, or by direct optimization using combinatorial methods that account for the integer constraints. A common approach for exact optimization is the coordinate-exchange algorithm, which iteratively improves the design by swapping individual run assignments to neighboring candidate points, evaluating the criterion at each step until convergence. This yields designs that are precisely tailored to the sample size n, though they often require more computational effort than approximate counterparts.

Approximate designs serve as relaxed solutions to the design problem, offering a benchmark bound on the achievable criterion value and guiding the search for exact designs, but they cannot be executed directly due to non-integer weights. Exact designs, while implementable, may incur a slight efficiency loss—typically small for large n—because the integer constraint prevents perfect replication of the approximate optimum, leading to marginally higher variances. For instance, in D-optimality under a simple linear model y = \beta_0 + \beta_1 x + \epsilon over the interval [-1, 1], the approximate optimal design places equal weight 1/2 at the endpoints x = -1 and x = 1, maximizing the determinant of the information matrix. The corresponding exact design, for even n, allocates n/2 runs to each endpoint, achieving identical efficiency.
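A minimal sketch of one way to convert approximate weights into an exact n-run allocation, using largest-remainder rounding (one of several possible apportionment rules; the quadratic-model design and run count are illustrative):

```python
import numpy as np

def round_design(weights, n):
    # Convert approximate weights w_i (summing to 1) into integer run
    # counts n_i summing to n, via largest-remainder rounding.
    raw = np.asarray(weights) * n
    counts = np.floor(raw).astype(int)
    remainder = n - counts.sum()
    # Give the leftover runs to the points with the largest fractional parts.
    order = np.argsort(raw - counts)[::-1]
    counts[order[:remainder]] += 1
    return counts

# Approximate D-optimal design for a quadratic model on [-1, 1]:
# weights (1/3, 1/3, 1/3) at points (-1, 0, 1), realized with n = 10 runs.
print(round_design([1/3, 1/3, 1/3], 10))  # e.g. [4 3 3]
```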

Computational algorithms

Computational algorithms play a crucial role in constructing optimal experimental designs, particularly when analytical solutions are unavailable, by leveraging optimization techniques to search over design spaces. These methods address both approximate designs, represented as probability measures over continuous spaces, and exact designs, which assign specific numbers of trials to points. Common approaches include heuristic searches for efficiency in large spaces and exact methods for smaller, discrete problems, often incorporating optimality criteria such as D-optimality to maximize the determinant of the information matrix.

Exchange algorithms, introduced by Fedorov, are widely used for constructing approximate D-optimal designs through iterative point swaps. Starting from an initial design measure \xi, the algorithm identifies candidate points x in the design space that maximize a sensitivity function, then exchanges mass between existing support points and these candidates to improve the criterion value, repeating until convergence. This process exploits the equivalence theorem, ensuring optimality when the maximum sensitivity equals the number of parameters. Modifications, such as random ordering of candidates, enhance computational efficiency for larger problems.

For nonlinear models, where the information matrix depends on unknown parameters, gradient-based methods iteratively update the design by following the gradient of the optimality criterion with respect to design weights or points. These approaches rely on directional-derivative functions d(\xi, x) = \frac{\partial \phi(\xi)}{\partial \xi}(x), where \phi is the criterion (e.g., \phi(\xi) = \log \det M(\xi) for D-optimality, yielding d(\xi, x) = \operatorname{tr}(M(\xi)^{-1} F(x)) with F(x) the model's information contribution at x). Local optimization techniques, such as quasi-Newton or sequential quadratic programming methods, adjust support points and weights by solving subproblems that maximize d(\xi, x) or minimize directional derivatives, often requiring multiple starts to avoid local optima due to non-convexity. Seminal implementations demonstrate convergence to locally optimal designs for compartmental and generalized linear models.

Branch-and-bound algorithms provide exact solutions for small discrete design spaces by systematically enumerating subsets while pruning branches using lower and upper bounds on the criterion. The method builds a search tree where nodes represent partial designs, computing relaxations (e.g., convex hulls of feasible information matrices) to bound subproblem optima; branches that cannot improve on the current best solution are discarded. This approach guarantees optimality for D- and related criteria in problems with up to dozens of points, though exponential complexity limits scalability. Early applications focused on regression and factorial designs, establishing it as a benchmark for verifying heuristic results.

Handling constraints such as blocking, replication limits, or budgets integrates mixed-integer programming (MIP) formulations to enforce integer trial assignments while optimizing the criterion. For instance, designs with fixed block sizes are modeled by introducing binary variables for point selection within blocks and linear constraints on totals, transforming the nonlinear objective (e.g., via semidefinite or second-order cone relaxations for D-optimality) into a solvable MIP. This enables exact solutions for constrained exact designs, with solvers like Gurobi handling problems up to moderate sizes; for blocking in multi-arm trials, it ensures balanced allocation across factors. Recent advancements use mixed-integer second-order cone programming for tighter bounds, improving solvability for nonlinear objectives.
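A minimal sketch of a Fedorov/Wynn-style vertex-direction iteration for an approximate D-optimal design (the grid resolution, quadratic model, step-size rule, and iteration count are all illustrative choices):

```python
import numpy as np

def f(x):
    # Regressor for a quadratic model y = b0 + b1*x + b2*x^2.
    return np.array([1.0, x, x * x])

grid = np.linspace(-1.0, 1.0, 101)       # candidate design points
w = np.full(grid.size, 1.0 / grid.size)  # start from the uniform design

for k in range(200):
    M = sum(wi * np.outer(f(x), f(x)) for x, wi in zip(grid, w))
    M_inv = np.linalg.inv(M)
    d = np.array([f(x) @ M_inv @ f(x) for x in grid])  # sensitivity d(x, xi)
    j = int(np.argmax(d))
    alpha = 1.0 / (k + 2)   # diminishing step toward the best candidate
    w = (1 - alpha) * w
    w[j] += alpha

# Mass should concentrate near the D-optimal support {-1, 0, 1},
# each with weight approximately 1/3.
print(grid[w > 0.05], w[w > 0.05].round(2))
```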

Discretization of continuous designs

Discretization involves converting continuous approximate designs, which assign fractional weights to design points as probability measures, into practical exact designs consisting of integer numbers of replications at selected points. This step is essential because approximate designs optimize theoretical criteria but cannot be directly implemented in experiments requiring a finite number of runs. The process aims to select a set of support points from the continuous design and assign non-negative integer weights that sum to the total number of experiments n, while closely approximating the optimality of the continuous design.

Rounding procedures provide a primary mechanism for this conversion. Simple rounding entails taking the fractional weights from the approximate design and rounding them to the nearest integers, which can result in designs that deviate substantially from optimality if the fractions are uneven or the number of support points is large relative to n. To mitigate this, optimal rounding employs integer programming formulations that minimize the difference in the design criterion between the discrete and approximate designs, ensuring the resulting exact design retains high efficiency. For instance, mixed-integer linear or conic programming can solve for integer weights that maximize the determinant of the information matrix or other criteria, subject to the summation constraint.

The efficiency loss from discretization is quantified using efficiency factors, which measure the increase in variance relative to the continuous optimum. For D-optimality, this is often expressed as the ratio \left[ \det M(\xi_\text{exact}) / \det M(\xi_\text{approx}) \right]^{1/p}, where M is the information matrix and p is the number of parameters; values close to 1 indicate minimal loss. Theoretical results show that for large n, the relative efficiency approaches 1, meaning discrete designs can nearly achieve the asymptotic optimality of their continuous counterparts, with losses typically less than 5-10% for moderate n.

Specific algorithms enhance the rounding process for particular cases. For balanced designs, where uniform replication across points is desirable, multidimensional sum-up rounding algorithms iteratively adjust weights to satisfy integer constraints while preserving balance and criterion value, offering polynomial-time approximations with guaranteed bounds. In scenarios with complex constraints, such as nonlinear models or restricted supports, simulated-annealing-style algorithms explore the discrete design space by probabilistically accepting suboptimal moves to escape local optima, converging to near-optimal exact designs. These methods are particularly effective when exhaustive enumeration becomes computationally intractable due to high dimensionality.

Key challenges in discretization include ensuring the integer weights are non-negative and sum precisely to n, which can lead to infeasible solutions if the approximate weights are poorly distributed, and addressing aliasing in fractional factorial settings, where rounding to a subset of treatment combinations may confound effects unless the subset is chosen to minimize resolution loss. These issues are exacerbated in small-sample designs, necessitating robust algorithms that incorporate regularization or relaxation techniques.
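A minimal sketch quantifying the D-efficiency of a rounded design relative to its approximate parent (the quadratic model and run count n = 7 are illustrative):

```python
import numpy as np

def f(x):
    return np.array([1.0, x, x * x])  # quadratic-model regressor

def info(points, weights):
    return sum(w * np.outer(f(x), f(x)) for x, w in zip(points, weights))

points = [-1.0, 0.0, 1.0]
approx = [1/3, 1/3, 1/3]      # approximate D-optimal weights
n = 7
exact_counts = [3, 2, 2]      # one feasible rounding of n * w_i
exact = [c / n for c in exact_counts]

p = 3  # number of parameters
d_eff = (np.linalg.det(info(points, exact)) /
         np.linalg.det(info(points, approx))) ** (1 / p)
print(d_eff)  # ~0.98: the rounding loses little efficiency
```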

Practical Applications

Model dependence and robustness

Optimal experimental designs are inherently dependent on the assumed model, as the optimality criteria rely on properties like the Fisher information matrix, which varies with the model structure and parameters. If the true data-generating process differs from the assumed model, the design may lead to biased parameter estimates or inefficient variance reduction. For instance, a D-optimal design for a quadratic model places support points at the extremes and center of the design region to estimate higher-order coefficients accurately, but if the true relationship is linear, this design can introduce unnecessary variance in the linear term estimates and potential bias if the model is misspecified by omitting interactions. In polynomial regression, higher-degree models fitted to data from a lower-degree process exemplify overfitting, where the design allocates points to capture spurious curvature, resulting in inflated variance and poor predictive performance outside the observed range.

To address this model dependence, robust criteria have been developed that seek designs performing well across a range of possible models or parameters. Maximin designs optimize the worst-case performance by solving \xi^* = \arg\max_{\xi \in \Xi} \min_{\theta \in \Theta} \phi(\xi, \theta), where \phi is an efficiency measure like D-efficiency, ensuring the design is not overly sensitive to parameter misspecification. Alternatively, Bayesian robust designs average the criterion over a prior distribution \pi(\theta), such as the expected log-determinant \int \log \det(F(\theta, \xi)) \, \pi(\theta) \, d\theta, which incorporates uncertainty in \theta to produce designs less vulnerable to local optima under the wrong model.

Sensitivity analysis further evaluates model dependence by perturbing assumed parameters or model structures and measuring the impact on the design criterion value. For example, small changes in the nominal \theta_0 can shift the optimal design points significantly in nonlinear models, with the effect quantified via the derivative of the criterion with respect to \theta. This approach reveals designs that maintain stability, such as those where the sensitivity function remains bounded across perturbations.

Lin-Yang robustness extends these ideas through a framework that explicitly trades off bias from model misspecification against estimator variance, minimizing the maximum mean squared error over a class of plausible models. This criterion, which balances the bias induced by incorrect functional forms with the variance from the assumed model, yields designs that are efficient even when the true model includes unmodeled terms, as demonstrated in applications to polynomial regression with potential higher-order effects.
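A minimal sketch of the maximin idea for a one-parameter nonlinear model (the exponential-decay mean \eta(x, \theta) = e^{-\theta x}, the parameter range, and the two candidate designs are all illustrative assumptions):

```python
import numpy as np

def info(points, weights, theta):
    # One-parameter Fisher information for mean eta(x) = exp(-theta * x):
    # I(xi, theta) = sum_i w_i * (d eta / d theta)^2 evaluated at x_i.
    return sum(w * (x * np.exp(-theta * x)) ** 2
               for x, w in zip(points, weights))

thetas = np.linspace(0.5, 2.0, 16)   # plausible parameter range
designs = {                           # two candidate two-point designs
    "narrow": ([0.8, 1.2], [0.5, 0.5]),
    "spread": ([0.5, 2.0], [0.5, 0.5]),
}

for name, (pts, wts) in designs.items():
    # Efficiency relative to the locally optimal one-point design at x = 1/theta.
    effs = [info(pts, wts, t) / info([1 / t], [1.0], t) for t in thetas]
    print(name, "worst-case efficiency:", round(min(effs), 3))
# The maximin choice is the design with the larger worst-case efficiency.
```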

Criterion selection and flexibility

The selection of an optimality criterion in experimental design is guided by the primary objectives of the study, such as exploration of the parameter space or precise estimation of specific effects. For instance, D-optimality, which maximizes the determinant of the information matrix, is often preferred for exploratory purposes where broad coverage of the design space is needed to minimize overall parameter variance. In contrast, linear or c-optimality criteria are suitable for targeted inference on particular linear combinations of parameters, such as contrasts between treatment effects. Additionally, the dimensionality of the design space influences the choice; in high-dimensional settings, criteria like E-optimality, focusing on the minimum eigenvalue, may be favored to ensure robustness against ill-conditioned information matrices. These guidelines help align the design with the experiment's goals while considering computational feasibility in complex models.

Flexible criteria allow adaptation to multiple objectives by forming weighted combinations of standard functionals, expressed as \psi(\xi) = \sum w_i \phi_i(\xi), where w_i \geq 0 are weights summing to 1, and \phi_i are individual criteria like A- or D-optimality applied to the information matrix M(\xi). Such compound criteria enable compromise between conflicting goals, such as balancing estimation precision and prediction accuracy, and are optimized using equivalence theorems that characterize optimality conditions. The mathematical framework treats these criteria within convex design theory, where the set of \phi-optimal designs forms a compact convex subset of the design space, facilitating efficient computation via algorithms like the Fedorov exchange method. This approach, rooted in Kiefer's theory, ensures that the resulting designs maintain desirable properties like continuity and concavity.

For scenarios involving multiple incompatible criteria, compromise designs are generated through multi-objective optimization, yielding a Pareto front that represents the trade-offs between objectives without a single dominant solution. Points on the front are non-dominated designs, where improving one criterion (e.g., reducing A-variance) worsens another (e.g., increasing prediction error), allowing experimenters to select based on priorities or resource constraints. These fronts are typically approximated using scalarization techniques, such as weighted sums, or evolutionary algorithms, providing a visual and quantitative basis for decision-making in applications like industrial experimentation. Seminal work in this area emphasizes the efficiency gains, with Pareto designs often achieving 80-90% of single-criterion performance across objectives.

Standardization of criteria enhances comparability across different models or parameter scales by normalizing variances relative to parameter magnitudes, often using the coefficient of variation to weight elements of the covariance matrix. Dette's standardized criteria, for example, minimize functions of standardized covariances, leading to designs with balanced efficiencies for all parameters regardless of scale differences. This is particularly useful in nonlinear models where parameter units vary, ensuring that the design prioritizes relative rather than absolute variance. Such methods promote fair evaluation of design quality, with efficiencies computed as ratios to single-criterion optimal designs for cross-model comparison.
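A minimal sketch of a compound criterion \psi = w_D \phi_D + w_A \phi_A evaluated on two information matrices (the weights and matrices are illustrative; each component is put on a larger-is-better scale first):

```python
import numpy as np

def compound_criterion(M, w_d=0.7, w_a=0.3):
    # psi(xi) = w_d * phi_D + w_a * phi_A, with each component on a
    # "larger is better" scale: log det M for D, -trace(M^{-1}) for A.
    phi_d = np.log(np.linalg.det(M))
    phi_a = -np.trace(np.linalg.inv(M))
    return w_d * phi_d + w_a * phi_a

M1 = np.array([[1.0, 0.0], [0.0, 1.0]])
M2 = np.array([[2.0, 0.0], [0.0, 0.5]])
# M1 and M2 have the same determinant (identical D-value) but different
# traces of the inverse, so the compound criterion breaks the tie (M1 wins).
print(compound_criterion(M1), compound_criterion(M2))
```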

Handling model uncertainty

In optimal experimental design, model uncertainty arises when the true underlying model form or parameter structure is unknown, potentially leading to inefficient designs if a single model is assumed. Approaches to handle this include model selection techniques to identify candidate models prior to design, followed by tailored criteria for discrimination, as well as probabilistic methods that incorporate uncertainty directly into the design process. These strategies aim to balance exploration of model alternatives with efficient estimation, ensuring robustness across possible model specifications. Recent work (as of 2025) explores generalized Bayesian approaches for robust experimental design in complex, high-dimensional systems.

Model selection often begins with pre-design tests using criteria like the Akaike information criterion (AIC) to evaluate and choose among candidate models based on prior data or simulations. The AIC balances model fit and complexity by penalizing the number of parameters, with lower values indicating better candidates for subsequent design; for instance, it has been applied to select between linear and nonlinear forms in experiments. Once candidates are identified, designs for discrimination, such as T-optimality, are constructed to maximize the sensitivity of the experiment to differences between models, particularly for testing parameter subsets. T-optimality maximizes the minimal integrated squared deviation between the predictions of competing models, proving effective in distinguishing polynomial models of varying degrees, as demonstrated in robust maximin constructions.

Bayesian experimental design addresses model uncertainty by maximizing an expected utility function that averages over possible outcomes and parameter values. A common formulation is to maximize the expected Shannon information gain, given by U(\xi) = \int p(y|\xi) \int \pi(\theta|y,\xi) \log \frac{\pi(\theta|y,\xi)}{\pi(\theta)} d\theta \, dy, which quantifies the expected reduction in uncertainty about the parameters as the mutual information between parameters and observations. This is equivalent to the expected Kullback-Leibler (KL) divergence between prior and posterior distributions, promoting designs that maximize information about model parameters under uncertainty. For example, in pharmacokinetic studies, KL-based designs have improved inference for uncertain dose-response models by prioritizing informative sampling points.

To handle non-informative scenarios in Bayesian design, reference priors such as the Jeffreys prior or Bernardo's reference prior are employed, providing objective starting points that maximize the expected information from the experiment without favoring specific parameter values. The Jeffreys prior, proportional to the square root of the determinant of the Fisher information, ensures invariance under reparameterization and has been used in nonlinear models to derive D-optimal designs robust to misspecification. Reference priors extend this by sequentially conditioning on nuisance parameters, yielding asymptotically optimal posteriors; for instance, in multiparameter models, they facilitate designs that prioritize parameters of interest while marginalizing others. These priors are particularly valuable in early-stage experiments where substantive knowledge is limited.

Model averaging integrates over a class of candidate models by weighting designs according to posterior model probabilities \pi(M), often via Bayesian model averaging (BMA). In BMA, the overall design criterion combines utilities from each model, such as averaged D-optimality, to produce a composite design robust to model choice; this has been shown to reduce prediction error in non-nested scenarios, like comparing linear versus nonlinear dynamics. For example, BMA-weighted designs have enhanced nitrogen rate optimization in agriculture by averaging over competing crop response models, improving economic outcomes under parameter and structural uncertainty. This approach avoids over-reliance on a single model, yielding more reliable inferences across the model ensemble.
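A minimal sketch of the standard nested Monte Carlo estimator for the expected Shannon information gain U(\xi), here for a one-parameter Gaussian model y \sim N(\theta x, \sigma^2) with a standard normal prior (the model, noise level, and sample sizes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.5

def log_lik(y, theta, x):
    # log p(y | theta, design x) for y ~ N(theta * x, sigma^2).
    return (-0.5 * ((y - theta * x) / sigma) ** 2
            - np.log(sigma * np.sqrt(2 * np.pi)))

def expected_information_gain(x, n_outer=2000, n_inner=2000):
    # U(x) = E_y[ KL(posterior || prior) ], via nested Monte Carlo:
    # average of log p(y_i | theta_i, x) - log mean_j p(y_i | theta_j, x).
    thetas = rng.standard_normal(n_outer)            # theta_i ~ prior
    ys = thetas * x + sigma * rng.standard_normal(n_outer)
    inner = rng.standard_normal(n_inner)             # theta_j ~ prior
    gains = []
    for theta_i, y_i in zip(thetas, ys):
        log_evidence = np.log(np.mean(np.exp(log_lik(y_i, inner, x))))
        gains.append(log_lik(y_i, theta_i, x) - log_evidence)
    return np.mean(gains)

# Larger |x| makes the observation more informative about theta.
for x in [0.2, 1.0, 2.0]:
    print(x, round(expected_information_gain(x), 3))
```

For this linear-Gaussian case the gain has the closed form (1/2) log(1 + x^2/\sigma^2), so the Monte Carlo estimates can be checked exactly.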

Advanced Methods

Iterative and sequential experimentation

Iterative and sequential experimentation in optimal design refers to procedures where the design is dynamically updated as data are collected, enabling adaptation to emerging insights and improving overall efficiency. Unlike fixed designs planned entirely in advance, these methods allow for adjustments to experimental conditions, such as input levels or allocation of resources, based on observed outcomes. This adaptability is particularly valuable in settings where model assumptions may evolve or initial information is limited. Seminal work by Fedorov established algorithms for sequentially constructing D-optimal designs by iteratively adding points that maximize the determinant of the information matrix, ensuring convergence to the optimal measure ξ.

Sequential designs incorporate adaptive allocation rules to target specific objectives, such as estimating response quantiles efficiently. The up-and-down method, for instance, adjusts the next experimental dose or level based on whether the current response meets or exceeds a threshold, creating a random walk that concentrates observations around the target quantile, such as the median lethal dose (LD50) in bioassays. This approach minimizes the number of trials needed compared to fixed grid searches while providing unbiased estimates with controlled variance. Additionally, stopping rules are integrated using sequential probability ratio tests, which compare the likelihood ratio of competing hypotheses after each observation to determine if experimentation can terminate early without compromising error rates.

The process unfolds in iteration cycles: an initial design ξ₀ is chosen, often based on prior knowledge or a conservative allocation; after collecting data at those points, an interim analysis updates the parameter estimates or posterior; the design is then re-optimized to select the next set of conditions that best reduce uncertainty in the updated model. In Bayesian settings, this re-optimization maximizes expected utility, such as preposterior information gain, over the posterior distribution. These cycles continue until a predefined stopping criterion is met, such as a target precision level.

Such methods offer substantial benefits over fixed designs, particularly in nonlinear models where the information matrix depends on unknown parameters, making upfront optimization unreliable. Sequential approaches can achieve comparable or superior precision with fewer total observations by focusing efforts where information gain is highest. They also handle drifting models, like those with time-varying parameters, by continuously recalibrating, thus maintaining robustness in dynamic environments such as chemical processes or biological systems.

A prominent application is in dose-finding clinical trials for new therapies, where patient safety demands adaptive dosing. The continual reassessment method exemplifies this: starting with a prior on the dose-toxicity curve, each patient's response updates the posterior, and the next dose is selected to best estimate the maximum tolerated dose (MTD), typically the level whose estimated toxicity probability is closest to a target of around 0.33. This sequential strategy has demonstrated higher accuracy in MTD identification and ethical advantages by avoiding overly toxic or ineffective doses, outperforming traditional escalation rules in simulations and practice.
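A minimal sketch of the up-and-down rule targeting the median of a dose-toxicity curve (the logistic curve, step size, and run length are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def true_toxicity(dose):
    # Illustrative dose-toxicity curve; the target quantile is where p = 0.5.
    return 1.0 / (1.0 + np.exp(-(dose - 3.0)))

doses, step, dose = [], 0.5, 1.0
for _ in range(30):
    doses.append(dose)
    toxic = rng.random() < true_toxicity(dose)  # True = toxic outcome
    # Up-and-down rule: step down after a toxic response, up otherwise,
    # creating a random walk centered on the dose with 50% toxicity.
    dose = dose - step if toxic else dose + step
    dose = max(dose, 0.0)

print(np.mean(doses[10:]))  # crude estimate of the median dose (true: 3.0)
```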

Response surface methodology

Response surface methodology (RSM) is an iterative statistical approach developed for modeling and optimizing processes where the response exhibits curvature, typically through sequential experimental designs that build upon initial first-order models to refine second-order approximations. Introduced by Box and Wilson in 1951, RSM employs low-order polynomial models to approximate the true response surface near the region of interest, enabling efficient exploration and optimization without exhaustive experimentation. This methodology integrates principles of sequential experimentation by starting with screening designs and progressing to more detailed mappings of the response landscape.

A key initial step in RSM is the method of steepest ascent, which uses a first-order model fitted to data from a factorial or fractional factorial design to identify the direction of maximum expected increase in the response. The steepest ascent path is determined by moving along a vector proportional to the estimated regression coefficients, conducting confirmatory experiments at intervals until the response plateaus or decreases, signaling proximity to the optimum. Once near the suspected optimum, the process shifts to second-order designs to capture curvature, as first-order models inadequately represent nonlinear relationships. This transition ensures that subsequent experiments focus on a refined region where quadratic effects dominate.

Central composite designs (CCD) are widely used in RSM for second-order modeling, consisting of a factorial portion at the corners of a hypercube, axial points along the axes at a specified distance from the center, and center points for replication and estimation of pure error. These designs achieve rotatability—a property ensuring uniform prediction variance at points equidistant from the design center—when the axial distance parameter α is chosen appropriately, such as α = n_f^{1/4}, where n_f is the number of runs in the factorial portion (e.g., α = (2^k)^{1/4} for a full factorial in k factors). Developed by Box and Hunter in 1957, rotatable CCDs provide efficient estimation of quadratic coefficients while minimizing the number of runs required.

The iterative nature of RSM involves fitting a provisional model to current data, using it to predict the response and guide the next phase, such as relocating the experimental region via steepest ascent or optimizing directly from a second-order fit. This sequential strategy aligns with optimal design criteria by incorporating D-optimality or other measures to select points that reduce estimate variances in subsequent iterations. In applications, particularly process optimization in chemical and manufacturing industries, RSM commonly employs models of the form y = \beta_0 + \mathbf{x}^T \boldsymbol{\beta} + \mathbf{x}^T \mathbf{B} \mathbf{x} + \epsilon, where y is the scalar response, \mathbf{x} the factor vector, \boldsymbol{\beta} the linear coefficients, \mathbf{B} the symmetric quadratic matrix, and \epsilon the error term, to identify optimal operating conditions like yield maximization in manufacturing.
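A minimal sketch of the steepest-ascent step: fit a first-order model to a 2² factorial in coded units, then move along the direction of the estimated slopes (the response values are invented for illustration):

```python
import numpy as np

# First-order fit from a 2^2 factorial with coded levels +/-1 (made-up data).
X = np.array([[1, -1, -1],
              [1,  1, -1],
              [1, -1,  1],
              [1,  1,  1]], dtype=float)   # columns: intercept, x1, x2
y = np.array([54.0, 60.0, 64.0, 73.0])     # observed responses
b = np.linalg.lstsq(X, y, rcond=None)[0]   # least-squares coefficients

# The steepest-ascent direction is proportional to the slope estimates
# (b1, b2); candidate runs are taken at increasing steps along this path.
direction = b[1:] / np.linalg.norm(b[1:])
for step in [0.5, 1.0, 1.5, 2.0]:
    print(step, np.round(step * direction, 3))
```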

System identification techniques

In system identification, optimal experimental design focuses on selecting inputs and measurement strategies that yield the most informative data for estimating dynamic system models, particularly in closed-loop or black-box settings where noise and uncertainty prevail. This subfield emphasizes techniques that adapt to evolving data to minimize parameter estimation variance while respecting practical constraints like feedback loops or time-series dependencies. Key approaches include stochastic approximation for recursive optimization, tailored input signals for parametric models, and dual control for integrated estimation and regulation.

Stochastic approximation methods, such as the Robbins–Monro algorithm, provide a robust framework for root-finding in noisy environments, which is essential for iteratively refining experimental designs in system identification. Introduced in 1951, the algorithm seeks the solution of g(\xi) = \alpha from noisy measurements, updating an estimate \xi_t through the recursive formula \xi_{t+1} = \xi_t - a_t (y_t - \alpha), where a_t > 0 is a decreasing step size satisfying \sum a_t = \infty and \sum a_t^2 < \infty, y_t is a noisy observation of g(\xi_t), and g is a continuous, monotone function whose root is sought. Almost sure convergence to the true root is guaranteed under mild conditions on the noise and function properties, enabling its use in black-box model optimization where direct evaluation is infeasible. In optimal experimental design, this procedure supports adaptive parameter estimation by treating design criteria, like minimizing prediction error, as stochastic optimization problems, with applications in recursive identification of nonlinear dynamics.

For input signal design in system identification, D-optimal criteria are widely applied to ARMAX models, which capture autoregressive, moving-average, and exogenous input effects in time-series data. These designs select inputs that maximize the determinant of the Fisher information matrix, thereby minimizing the volume of the confidence ellipsoid for parameter estimates and ensuring efficient identification of transfer functions. Optimal signals typically excite the system across relevant frequencies, such as through power spectral density shaping, to reveal dynamic modes without excessive energy input; for example, periodic or multisine sequences are tuned to align with model poles and zeros. This approach enhances identifiability in linear time-invariant systems by reducing parameter uncertainty, as demonstrated in control-oriented identification where input constraints like amplitude or power limits are incorporated.

Dual control extends optimal design principles to feedback systems by simultaneously optimizing for control performance and parameter identification, addressing the trade-off between immediate regulation and long-term model improvement. Formulated as a stochastic optimal control problem, it quantifies the dual effects of inputs: "cautious" actions that hedge against parameter uncertainty and "probing" actions that deliberately excite the system to reduce it. Pioneered in the early 1960s, this method computes policies that minimize a combined cost of output error and parameter variance, often using approximations like certainty equivalence for tractability in high-dimensional settings. In practice, dual control is vital for adaptive systems where poor initial models could destabilize operations, enabling balanced experimentation in real-time environments.

These techniques find application in adaptive designs for chemical processes, where stochastic approximation and D-optimal inputs optimize reactor experiments to identify kinetic parameters with minimal trials, improving process performance under uncertainty. In econometrics, optimal designs for time-series data collection, such as those balancing input persistence and excitation, enhance estimation of dynamic economic models like vector autoregressions from observational data. Sequential refinement can further update these designs as new data arrives, ensuring ongoing robustness.
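A minimal sketch of the Robbins–Monro recursion for a noisy black-box root-finding problem (the linear response and noise level are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

def noisy_g(x):
    # Black-box response g(x) = 2x - 1 observed with additive noise;
    # we seek the root g(x) = 0, i.e. x* = 0.5.
    return 2.0 * x - 1.0 + 0.3 * rng.standard_normal()

x = 0.0
for t in range(1, 5001):
    a_t = 1.0 / t                 # steps: sum a_t = inf, sum a_t^2 < inf
    x = x - a_t * noisy_g(x)      # Robbins-Monro recursion (target alpha = 0)
print(round(x, 3))                # converges near 0.5
```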

Historical Development

Early foundations

The foundations of optimal experimental design trace back to the late 18th and early 19th centuries, when probabilistic methods for parameter estimation began to highlight the importance of data collection strategies. Pierre-Simon Laplace's development of inverse probability in works from 1774 onward provided an early framework for inferring causes from observed effects. Similarly, Carl Friedrich Gauss's 1809 formulation of the method of least squares for linear parameter estimation demonstrated that certain observational arrangements minimize estimate variance, implying design choices to achieve best linear unbiased estimates under the Gauss-Markov theorem, without assuming specific error distributions. These precursors emphasized variance reduction but lacked explicit optimization for experimental layouts.

A key milestone came in 1918 when Danish statistician Kirstine Smith published the seminal paper introducing optimal experimental designs for polynomial regression models. Working under Karl Pearson, Smith derived designs that minimize the variance of estimated coefficients, establishing the core principles of selecting experimental points to optimize statistical efficiency.

In the early 20th century, Ronald A. Fisher's agricultural experiments at Rothamsted Experimental Station during the 1920s introduced key principles like randomization, replication, and blocking to mitigate bias and variability in field trials, forming the basis for controlled experimentation though without formal optimality criteria. Fisher's 1925 book Statistical Methods for Research Workers further promoted these techniques for efficient inference in biological contexts. Concurrently, Jerzy Neyman's work in the 1930s advanced efficiency concepts, including uniformly most powerful tests via the Neyman-Pearson lemma (1933), which selected procedures minimizing variance for estimators or maximizing power for tests, paving the way for design choices that optimize statistical performance. Neyman's emphasis on purposive sampling and efficient estimation extended to experimental settings, highlighting the trade-offs in resource allocation.

A pivotal advancement occurred in the 1940s with Abraham Wald's sequential analysis, developed amid wartime applications for inspection sampling, which enabled adaptive designs stopping data collection when evidence suffices, thereby minimizing expected sample sizes while controlling error rates. Wald's sequential probability ratio test (1945) laid groundwork for modern adaptive experimentation by integrating decision rules with ongoing observations. Earlier pre-optimality ideas emerged in 1930s bioassays, where uniform designs standardized dose allocations to ensure even coverage of response surfaces, as seen in protocols for serum potency titration that balanced precision and simplicity in quantal response models. These approaches anticipated later optimality by prioritizing designs that uniformly distribute experimental effort to reduce estimation uncertainty.

Key advancements and contributors

In the mid-20th century, foundational advancements in optimal experimental design emerged, particularly through the development of response surface methodology (RSM) by George E. P. Box and K. B. Wilson in 1951, which provided a sequential approach to exploring and optimizing response surfaces using quadratic models fitted to designed experiments. This method emphasized steepest ascent techniques to efficiently navigate parameter spaces in industrial processes. Concurrently, in 1960, Jack Kiefer and Jacob Wolfowitz introduced the equivalence theorem, establishing a critical link between D-optimality (maximizing the determinant of the information matrix) and G-optimality (minimizing the maximum variance of predicted responses) for approximate designs, enabling sensitivity function-based characterizations of optimality.

The 1970s saw further methodological progress with Valerii V. Fedorov's 1972 exchange algorithm, a point-exchange procedure that iteratively swaps design points with candidates to construct exact D-optimal designs from candidate sets, proving highly efficient for discrete problems. Anthony C. Atkinson advanced robustness considerations during this decade, notably through joint work with Fedorov on T-optimal designs for model discrimination, which prioritize experiments that maximize the sensitivity to differences between rival models while maintaining efficiency. Atkinson's contributions extended to compound criteria that balance multiple objectives, enhancing resilience to model misspecification.

The Bayesian paradigm gained prominence in the 1990s, with Kathryn Chaloner and Isabella Verdinelli's 1995 framework formalizing optimal designs via expected utility maximization over prior distributions, accommodating nonlinear models and parameter uncertainty through decision-theoretic principles. Building on this, in the 2000s, Kenneth J. Ryan developed approaches to model averaging in Bayesian optimal design, integrating posterior model probabilities to create designs robust to uncertainty across multiple candidate models.

More recent developments from the 2000s onward have integrated computational tools, such as semidefinite programming (SDP) solvers, to tackle complex approximate design problems by formulating optimality criteria as convex conic programs solvable via interior-point methods. In the 2010s and beyond, hybrid methods combining machine-learning techniques—like Gaussian processes or neural networks—with traditional design theory have enabled scalable optimal designs for highly nonlinear models, improving efficiency in high-dimensional spaces through surrogate-based optimization.

Key contributors include Fedorov, whose algorithmic innovations underpin much of modern design construction; Atkinson, recognized for bridging optimality criteria with practical robustness; and Friedrich Pukelsheim, whose comprehensive 1993 monograph Optimal Design of Experiments elucidated the convexity of design spaces and c-optimal criteria, providing a unified geometric foundation for exact and approximate designs.

  47. [47]
    [PDF] Bayesian Model Averaging: A Tutorial - Colorado State University
    Bayesian model averaging (BMA) accounts for model uncertainty by averaging posterior distributions weighted by posterior model probability.
  48. [48]
    Bayesian‐optimized experimental designs for estimating the ...
    Jun 4, 2025 · This study optimizes field experiments for estimating the EONR using a model-averaging approach within a Bayesian framework.
  49. [49]
    (PDF) Theory of Optimal Experiments Designs - ResearchGate
    Mar 9, 2014 · The focus was on the design of optimal inputs that maximize some scalar function of the Fisher information matrix under a constraint on the power of the input ...
  50. [50]
    The Up-and-Down Method for Small Samples - Semantic Scholar
    The Up-and-Down Method for Small Samples · W. Dixon · Published 1 December 1965 · Mathematics · Journal of the American Statistical Association.
  51. [51]
    Sequential Analysis : Wald Abraham : Free Download, Borrow, and ...
    Jan 16, 2017 · Sequential Analysis. by: Wald Abraham. Publication date: 1947. Topics ... PDF WITH TEXT download · download 1 file · SINGLE PAGE PROCESSED JP2 ...
  52. [52]
    Bayesian Experimental Design: A Review - Project Euclid
    Kathryn Chaloner. Isabella Verdinelli. "Bayesian Experimental Design: A Review." Statist. Sci. 10 (3) 273 - 304, August, 1995. https://doi.org/10.1214/ss ...
  53. [53]
    Sequential optimal experimental design for vapor-liquid equilibrium ...
    Dec 5, 2024 · We propose a general methodology of sequential locally optimal design of experiments for explicit or implicit nonlinear models, ...Missing: seminal | Show results with:seminal
  54. [54]
    On the Experimental Attainment of Optimum Conditions
    Box AND WILSON-On the Experimental. [No. I,. One such arrangement consists of ... 1951]. Attainment of Optimum Conditions. 13. This is a design of type B ...
  55. [55]
    Multi-Factor Experimental Designs for Exploring Response Surfaces
    Such designs insure that the estimated response has a constant variance at all points which are the same distance from the center of the design.
  56. [56]
    Laplace's Theory of Inverse Probability, 1774–1786 - SpringerLink
    Laplace's Theory of Inverse Probability, 1774–1786 ... This process is experimental and the keywords may be updated as the learning algorithm improves.Missing: 1780s design
  57. [57]
    [PDF] Some History of Optimality - Rice Statistics
    Opti- mality as a deliberate program for determining good procedures was introduced in. 1933 by Neyman and Pearson in a paper (on testing rather than estimation) ...Missing: Treloar | Show results with:Treloar
  58. [58]
    Fisher, Bradford Hill, and randomization - Oxford Academic
    In the 1920s RA Fisher presented randomization as an essential ingredient of his approach to the design and analysis of experiments, validating significance ...
  59. [59]
    R. A. Fisher and his advocacy of randomization
    Feb 6, 2007 · The requirement of randomization in experimental design was first stated by RA Fisher, statistician and geneticist, in 1925 in his book Statistical Methods for ...
  60. [60]
    [PDF] On the Problem of the Most Efficient Tests of Statistical Hypotheses
    Jun 26, 2006 · On the Problem of the most Efficient Tests of Statistical Hypotheses. By J. NEYMAN, Nencki Institute, Soc. Sci. Lit. Varsoviensis, and ...
  61. [61]
    A Retrospective of Wald's Sequential Analysis—Its Relation to ...
    The theory of sequential analysis was initiated by Wald during the 1940's in response to problems of sampling inspection. Wald's contributions are reviewed, ...
  62. [62]
    The titration of antipneumococcus serum - ResearchGate
    Smith (1932) described a bioassay for an anti-pneumococcus serum in which ... The performance of the uniform design in examined and we show that this ...
  63. [63]
    The Usefulness of Optimum Experimental Designs - jstor
    The D-optimum design for the third-order model likewise includes pure components and binary mixtures. An example is given on p. 137 of Atkinson and Donev (1992) ...
  64. [64]
    Computation of Optimal Identification Experiments for Nonlinear ...
    The problem of optimal experimental design (OED) for parameter estimation of nonlinear ... sensitivity function with the corresponding nominal parameter value ( ...
  65. [65]
    Optimal Designs for Nonlinear Mixed-effects Models Using ... - NIH
    A guiding principle is that the hybridized algorithm should perform better than either of the algorithms used in the hybridization. The aim of this paper is to ...