
Motion planning

Motion planning is a fundamental problem in robotics and computational geometry that involves computing a collision-free path or trajectory for a robot or mechanical system to move from an initial configuration to a desired goal configuration within an environment containing obstacles. This process is typically formulated in the robot's configuration space (C-space), which represents all possible positions and orientations of the system, excluding invalid states due to collisions. The core challenge lies in navigating high-dimensional C-spaces efficiently while ensuring the path adheres to kinematic, dynamic, and environmental constraints.

The origins of motion planning trace back to the 1970s with the formulation of the "Piano Mover's Problem," which sought exact solutions for moving rigid objects in two- or three-dimensional spaces. In the 1980s, researchers developed combinatorial algorithms, such as visibility graphs and cell decomposition, providing complete and exact solutions but suffering from exponential computational complexity in higher dimensions. The 1990s marked a breakthrough with the advent of sampling-based methods, like probabilistic roadmaps (PRM) and rapidly-exploring random trees (RRT), which enabled practical planning in complex, high-dimensional spaces by probabilistically exploring the C-space.

Key aspects of motion planning include distinguishing between geometric planning (focusing on position and orientation) and kinodynamic planning (incorporating velocity, acceleration, and control inputs for dynamic systems). Algorithms are evaluated on properties such as completeness (a guaranteed solution if one exists), optimality (finding the best path), and resolution completeness (handling discretization errors). Challenges persist in handling narrow passages in C-space, real-time computation for dynamic environments, and integration with sensing and control systems.

Motion planning finds essential applications in industrial robotics for assembly tasks, autonomous vehicles for navigation, surgical robots for precise interventions, and unmanned aerial vehicles for obstacle avoidance. Recent advances incorporate learning-based techniques, such as neural networks for trajectory prediction, to address scalability in unstructured environments. Despite progress, ongoing research focuses on optimality guarantees and adaptability to uncertain or changing worlds.

Introduction

Definition and scope

Motion planning is the computational process of determining collision-free paths or trajectories for a mechanical system, such as a robot, from an initial configuration to a goal configuration while respecting environmental and physical constraints. This involves finding a sequence of valid motions that avoid collisions with obstacles and satisfy kinematic or dynamic limitations of the system. The core problem of motion planning is formulated as follows: given an initial state q_{\text{start}}, a goal state q_{\text{goal}}, and a description of obstacles, compute a valid motion plan that connects q_{\text{start}} to q_{\text{goal}}. This is achieved through a search in the configuration space C, which represents all possible configurations of the system. The scope of motion planning includes kinematic planning, which addresses constraints on position and orientation without considering forces, and dynamic planning, which incorporates velocity and acceleration limits arising from the system's dynamics and control inputs. It further distinguishes path planning, which yields a static geometric path in configuration space, from full motion planning, which produces a time-parameterized trajectory accounting for temporal aspects. Formally, a path in motion planning is defined as a continuous map \pi: [0,1] \to C_{\text{free}}, where C_{\text{free}} denotes the collision-free subset of the configuration space, with \pi(0) = q_{\text{start}} and \pi(1) = q_{\text{goal}}. Motion planning originated in robotics but has interdisciplinary roots and applications in computer graphics for animation and virtual prototyping, computational geometry for geometric problem-solving, artificial intelligence for decision-making under uncertainty, and control theory for trajectory generation.

Historical overview

The origins of motion planning trace back to the 1960s and 1970s, when researchers began exploring problems related to collision-free motion for rigid bodies amid obstacles. Early work focused on theoretical foundations, such as determining feasible motions for objects in constrained environments. A pivotal contribution came in 1977 with S. M. Udupa's introduction of the configuration space concept, which transformed the robot into a point while expanding obstacles accordingly, enabling systematic collision avoidance analysis. This approach laid the groundwork for geometric methods. In 1979, Tomás Lozano-Pérez and Michael A. Wesley advanced the field by developing algorithms for planning collision-free paths among polyhedral obstacles, incorporating visibility graphs to connect tangent points on obstacles and cell decomposition to partition free space into navigable regions.

The 1980s marked a shift toward exact algorithmic solutions for complex scenarios, exemplified by the "piano movers' problem," which involves translating and rotating a rigid object such as a piano through a cluttered 2D or 3D environment. Jacob T. Schwartz and Micha Sharir provided seminal exact methods in a series of papers from 1981 to 1983, analyzing the topological properties of configuration spaces and developing algorithms with polynomial-time complexity for polygonal bodies amid polygonal barriers. These works established rigorous complexity bounds and influenced subsequent research in computational geometry for motion planning.

Advancements in the 1990s introduced probabilistic sampling-based techniques to handle high-dimensional spaces intractable for exact methods. In 1996, Lydia E. Kavraki, Petr Svestka, Jean-Claude Latombe, and Mark H. Overmars proposed the Probabilistic Roadmap (PRM) method, which builds a graph of random configurations connected by straight-line paths, enabling efficient query resolution in diverse environments. Building on this, Steven M. LaValle introduced Rapidly-exploring Random Trees (RRT) in 1998, an incremental tree-growing algorithm that biases exploration toward unexplored areas, proving effective for nonholonomic systems and dynamic constraints.

The 2000s saw consolidation and optimization of sampling-based planners, alongside growing community infrastructure. In 2011, Sertac Karaman and Emilio Frazzoli extended RRT to RRT*, an asymptotically optimal variant that refines paths over time through rewiring, integrating seamlessly with optimization frameworks. Around 2000, the IEEE Robotics and Automation Society formed the Technical Committee on Algorithms for Planning and Control of Robot Motion to foster research collaboration. Biennial workshops, such as the International Workshop on the Algorithmic Foundations of Robotics (WAFR) since 1994, have further promoted advancements.

Post-2010, the field shifted toward learning-based paradigms, with reinforcement learning gaining prominence for adaptive planning in uncertain settings from 2015 to 2020. By 2025, data-driven approaches increasingly incorporated large language models (LLMs) for high-level task decomposition and integration with low-level planners, enhancing human-robot interaction in complex scenarios.

Fundamental Concepts

Configuration space

In motion planning, the configuration space, often denoted as C, is the mathematical abstraction that represents all possible states or configurations of a robotic system. Formally, C = \{ q \mid q represents a complete specification of the system's pose, including positions and orientations of all relevant components \}. This space encapsulates the degrees of freedom (DOF) inherent to the system, allowing the complex physical motion problem to be reduced to geometric queries in an abstract domain. The concept was pioneered in the work of Tomás Lozano-Pérez, who introduced configuration space as a tool for planning collision-free motions among obstacles. The dimensionality of C is determined by the number of DOF, which corresponds to the minimal parameters needed to define any configuration. For a rigid body in three-dimensional space, this typically yields 6 DOF: three for translational position (x, y, z) and three for rotational orientation (e.g., roll, pitch, yaw). In multi-joint manipulators, such as a 2D planar robot arm with two revolute joints, a configuration q might be specified as q = (\theta_1, \theta_2), where \theta_1 and \theta_2 are joint angles, resulting in a 2-dimensional C. More complex systems, like a humanoid robot, can have dozens or hundreds of DOF, leading to high-dimensional spaces that pose significant computational challenges. Configuration space is distinct from task space, which focuses on the end-effector's position and orientation in the workspace, often a subset T \subset \mathbb{R}^3 for positional tasks. The mapping from C to task space is provided by forward kinematics, a function f: C \to T that computes the end-effector pose from joint parameters q. This separation enables motion planning to operate in the full C, where joint limits and self-collisions are naturally represented, while task specifications constrain the problem to desired regions in T. Inverse kinematics then seeks q such that f(q) achieves a target in T, highlighting the non-trivial mapping between the spaces. A critical aspect of C involves obstacle representation for collision avoidance. A configuration q lies in the obstacle subset C_{\text{obs}} if the robot, when placed at q, intersects any environmental obstacles; otherwise, it belongs to the free space C_{\text{free}} = C \setminus C_{\text{obs}}. This partitioning transforms physical collisions into point queries in C, simplifying detection algorithms. For instance, in a 2D example of a disk-shaped robot navigating among polygonal obstacles, C_{\text{obs}} appears as "grown" regions around the obstacles, expanded by the robot's radius, while the robot itself reduces to a point in this space. Such visualizations aid intuition, revealing how obstacles in the workspace "sweep" forbidden volumes in C. The structure of C often forms a manifold, a smooth, differentiable space that supports continuous paths between configurations, essential for feasible motions. However, in high dimensions—common for advanced robots—the curse of dimensionality arises, exponentially increasing volume and complicating exploration and collision checking. This geometric foundation underpins subsequent partitioning into free and obstacle regions, enabling path planning as a search for collision-free routes within C_{\text{free}}.
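
The following sketch illustrates these mappings for the two-joint planar arm described above: a configuration q = (\theta_1, \theta_2) lives in a 2-dimensional C, and forward kinematics maps it into task space. The link lengths and joint limits are illustrative assumptions, not values from any particular robot.

```python
import numpy as np

# Minimal sketch: forward kinematics f: C -> T for a 2-link planar arm.
# Link lengths and joint limits below are illustrative assumptions.
L1, L2 = 1.0, 0.8                      # link lengths (m)
JOINT_LIMITS = (-np.pi, np.pi)         # identical limits assumed for both joints

def forward_kinematics(q):
    """Map a configuration q = (theta1, theta2) to the end-effector (x, y)."""
    t1, t2 = q
    x = L1 * np.cos(t1) + L2 * np.cos(t1 + t2)
    y = L1 * np.sin(t1) + L2 * np.sin(t1 + t2)
    return np.array([x, y])

def in_joint_limits(q):
    """A configuration is valid only if every joint angle respects its limits."""
    lo, hi = JOINT_LIMITS
    return all(lo <= t <= hi for t in q)

q = (np.pi / 4, -np.pi / 6)            # a point in the 2-D configuration space C
print(forward_kinematics(q), in_joint_limits(q))
```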

Free and obstacle spaces

In motion planning, the configuration space C is partitioned into the free space C_\text{free} and the obstacle space C_\text{obstacle}, which delineate allowable and forbidden configurations for the robot. The free space consists of all configurations q \in C such that the robot, placed at q, does not intersect with any obstacles in the workspace or itself, assuming a static environment with fixed obstacle geometry. Formally, C_\text{free} = \{ q \in C \mid A(q) \cap O = \emptyset \}, where A(q) represents the region occupied by the robot at configuration q and O denotes the set of obstacle regions in the workspace. This partition ensures that motion plans remain confined to C_\text{free} to avoid collisions, a foundational step before applying any path-searching techniques. The obstacle space, or C_\text{obstacle}, is the complement of the free space in the full configuration space: C_\text{obstacle} = C \setminus C_\text{free}. To construct C_\text{obstacle}, the robot's geometry is incorporated via the Minkowski sum, which expands workspace obstacles to account for the robot's extent, effectively treating the robot as a point in this augmented space. For a translating rigid robot with shape R (relative to a reference point) and workspace obstacles O, the corresponding configuration obstacle is given by the expanded form O' = O \oplus (-R), where \oplus denotes the Minkowski sum defined as A \oplus B = \{ a + b \mid a \in A, b \in B \}. This operation captures all configurations where the reference point placement would cause overlap, enabling efficient representation of collision constraints in higher-dimensional C. Collision detection is essential for verifying whether a configuration or path segment lies in C_\text{free}, and it often serves as the computational bottleneck in planning algorithms due to repeated queries. Common methods include bounding volume hierarchies (BVH), which organize object geometries into tree structures of enclosing volumes (e.g., axis-aligned bounding boxes or oriented bounding boxes) to prune unnecessary intersection tests, and the Gilbert-Johnson-Keerthi (GJK) algorithm, which iteratively computes the minimum distance between convex shapes using support functions without full shape representation. For simple primitives like spheres or boxes, these algorithms achieve constant-time O(1) per query, though more complex polyhedral models may require O(\log n) time with hierarchical acceleration, where n is the number of geometric primitives. In practice, BVH-based approaches are widely adopted in collision-checking libraries for their balance of preprocessing cost and query speed in dynamic scenes. Challenges arise at the boundaries of C_\text{free}, particularly in narrow passages where the clearance between obstacles is comparable to the robot's size, complicating the discovery of feasible paths due to the thin connectivity of the free space. For multi-link manipulators, self-collision must also be modeled within C_\text{obstacle}, treating links as separate bodies whose intersections are forbidden, often increasing the dimensionality and detection overhead. These formulations typically assume holonomic systems with non-penetration constraints, meaning the robot can move freely in all directions without velocity limits or interpenetration allowances during motion.
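
As a minimal illustration of the C_\text{free} membership test, the sketch below checks a disk-shaped robot against circular workspace obstacles; for disks, the Minkowski sum reduces to inflating each obstacle radius by the robot radius. The robot radius and obstacle layout are assumed values.

```python
import math

# Minimal sketch: testing membership of a configuration in C_free for a
# disk-shaped robot among circular workspace obstacles. For disks, the
# Minkowski sum reduces to inflating each obstacle radius by the robot radius.
ROBOT_RADIUS = 0.3                                  # assumed robot size
OBSTACLES = [((2.0, 1.0), 0.5), ((4.0, 3.0), 1.0)]  # (center, radius) pairs (illustrative)

def in_c_free(q):
    """q = (x, y): the robot placed at q must not overlap any inflated obstacle."""
    for (cx, cy), r in OBSTACLES:
        if math.hypot(q[0] - cx, q[1] - cy) <= r + ROBOT_RADIUS:
            return False          # configuration lies in C_obstacle
    return True                   # configuration lies in C_free

print(in_c_free((0.0, 0.0)), in_c_free((2.2, 1.1)))
```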

Paths and trajectories

In motion planning, a path is defined as a continuous, collision-free curve in the configuration space that connects an initial configuration q_{\text{start}} to a goal configuration q_{\text{goal}}. Formally, it is represented as a mapping \gamma: [0,1] \to \mathcal{C}_{\text{free}}, where \mathcal{C}_{\text{free}} denotes the free configuration space, ensuring \gamma(0) = q_{\text{start}} and \gamma(1) = q_{\text{goal}}, with the image of \gamma lying entirely within \mathcal{C}_{\text{free}} to avoid obstacles. Paths are primarily geometric constructs that do not account for timing or dynamics, making them suitable for kinematic planning in static environments where the robot is fully actuated. A trajectory extends the concept by incorporating time parameterization, specifying the robot's configuration as a function of time while respecting velocity, acceleration, and other dynamic constraints. It is typically denoted as \tau: [0,T] \to \mathcal{C}_{\text{free}}, where T is the total duration, \tau(0) = q_{\text{start}}, and \tau(T) = q_{\text{goal}}, with the trajectory satisfying differential equations such as \dot{\tau}(t) = f(\tau(t), u(t)), where f models the system dynamics and u(t) \in U is the control input from the admissible set U. This ensures feasibility under mechanical limits, distinguishing trajectories from paths by adding temporal and kinematic considerations essential for real-world execution. Trajectories can be parameterized in various ways, such as uniform speed along the path for simplicity or minimum-time profiles to optimize duration while adhering to bounds on velocity and acceleration. Smoothing techniques, like spline representations, are often applied to refine raw paths into feasible trajectories by ensuring continuity in position, velocity, and acceleration, which helps avoid abrupt changes that could violate joint limits or cause excessive jerk. Paths and trajectories may be resolved in discrete or continuous forms; discrete representations use a sequence of waypoints connected by interpolation methods, such as linear interpolation for basic straight-line segments or cubic polynomials for smoother velocity profiles that preserve C^2 continuity. In approximate scenarios, the goal is treated as a region G \subset \mathcal{C}_{\text{free}} rather than a single point, allowing the endpoint \gamma(1) or \tau(T) to lie anywhere within G to accommodate tolerances or computational efficiency.
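
A minimal sketch of this distinction: the straight-line path \gamma(s) below is purely geometric, while the trajectory \tau(t) adds a cubic time-scaling with zero velocity at both endpoints. The endpoints and duration are illustrative assumptions.

```python
import numpy as np

# Minimal sketch: turning a geometric straight-line path gamma(s) between two
# configurations into a trajectory tau(t) via a cubic time-scaling that gives
# zero velocity at both endpoints. Endpoints and duration are illustrative.
q_start = np.array([0.0, 0.0])
q_goal  = np.array([1.0, 2.0])
T = 4.0                                   # total duration in seconds

def path(s):
    """Geometric path gamma: [0, 1] -> C_free (straight line, no timing)."""
    return (1.0 - s) * q_start + s * q_goal

def trajectory(t):
    """Trajectory tau: [0, T] -> C_free using the cubic time-scaling s(t)."""
    u = np.clip(t / T, 0.0, 1.0)
    s = 3 * u**2 - 2 * u**3               # s(0)=0, s(T)=1, s'(0)=s'(T)=0
    return path(s)

for t in np.linspace(0.0, T, 5):
    print(t, trajectory(t))
```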

Classical Algorithms

Combinatorial methods

Combinatorial methods provide exact, complete solutions to motion planning problems by systematically decomposing the free space \mathcal{C}_{free} or constructing graphs that capture its connectivity, primarily for low-dimensional (2D or 3D) geometric problems with rigid bodies and polygonal or polyhedral obstacles. These approaches, developed in the 1980s, offer optimality guarantees but suffer from high computational complexity that grows exponentially with dimensionality. Visibility graphs address the problem of finding the shortest collision-free path among polygonal obstacles in the plane by constructing a graph whose nodes are the start, goal, and obstacle vertices, with edges connecting pairs that have line-of-sight (tangent segments not intersecting obstacles). The shortest path in this graph, found via Dijkstra's algorithm, corresponds to an optimal path that "hugs" obstacle boundaries. Construction requires O(n^2 \log n) time for n vertices using rotational sweeps (and O(n^2) with more refined methods), making the approach complete and optimal in the plane but impractical in higher dimensions, where shortest paths need not pass through obstacle vertices. Cell decomposition partitions \mathcal{C}_{free} into a finite number of non-overlapping cells (e.g., trapezoids in the plane via vertical decomposition), from which an adjacency graph is built by connecting adjacent cells. A search on this graph (e.g., using A*) yields a sequence of cells from start to goal, with a local path planner connecting cell centers within each cell. Exact decompositions ensure completeness and allow optimality with appropriate edge costs, but require solving Minkowski sums for robot-obstacle interactions and scale poorly (single exponential in dimension). Approximate variants reduce complexity at the cost of only resolution completeness.
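
A compact sketch of a visibility-graph planner for a point robot and a single convex polygonal obstacle is shown below; line-of-sight is tested with segment-intersection and point-in-polygon checks, and the shortest path is extracted with Dijkstra's algorithm. The coordinates are illustrative assumptions, and the geometry handling is simplified relative to a production implementation.

```python
import heapq
import math

# Minimal sketch of a visibility-graph planner for a point robot and a single
# convex polygonal obstacle; coordinates below are illustrative assumptions.
START, GOAL = (0.0, 0.0), (6.0, 4.0)
OBSTACLE = [(2.0, 1.0), (4.0, 1.0), (4.0, 3.0), (2.0, 3.0)]  # CCW convex polygon

def orient(a, b, c):
    """Sign of the cross product (b - a) x (c - a)."""
    v = (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])
    return (v > 1e-12) - (v < -1e-12)

def properly_intersect(p, q, a, b):
    """True if open segments pq and ab cross at a single interior point."""
    if p in (a, b) or q in (a, b):
        return False                      # sharing an endpoint is not blocking
    return (orient(p, q, a) != orient(p, q, b) and
            orient(a, b, p) != orient(a, b, q) and
            0 not in (orient(p, q, a), orient(p, q, b), orient(a, b, p), orient(a, b, q)))

def strictly_inside(pt, poly):
    """Point strictly inside a counter-clockwise convex polygon."""
    return all(orient(poly[i], poly[(i+1) % len(poly)], pt) > 0 for i in range(len(poly)))

def visible(p, q):
    edges = [(OBSTACLE[i], OBSTACLE[(i+1) % len(OBSTACLE)]) for i in range(len(OBSTACLE))]
    if any(properly_intersect(p, q, a, b) for a, b in edges):
        return False
    mid = ((p[0]+q[0]) / 2, (p[1]+q[1]) / 2)
    return not strictly_inside(mid, OBSTACLE)   # reject diagonals through the interior

def dijkstra(nodes):
    dist, prev = {n: math.inf for n in nodes}, {}
    dist[START] = 0.0
    pq = [(0.0, START)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == GOAL:
            break
        if d > dist[u]:
            continue
        for v in nodes:
            if v != u and visible(u, v):
                nd = d + math.dist(u, v)
                if nd < dist[v]:
                    dist[v], prev[v] = nd, u
                    heapq.heappush(pq, (nd, v))
    path, n = [], GOAL
    while n != START:
        path.append(n)
        n = prev[n]
    return [START] + path[::-1]

print(dijkstra([START, GOAL] + OBSTACLE))
```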

Grid-based methods

Grid-based methods discretize the continuous configuration space \mathcal{C} into a finite lattice of cells, transforming the motion planning problem into a graph search over feasible cells in \mathcal{C}_{free}. This approach represents states at cell centers or lattice points, with edges connecting adjacent cells that satisfy motion constraints, allowing exhaustive exploration via graph-search algorithms. Such discretization enables deterministic planning, particularly effective in low-dimensional spaces like 2D or 3D environments for mobile robots or manipulators. Grid types include uniform grids, which apply fixed-resolution cells across \mathcal{C}, such as 2D occupancy grids dividing space into equal squares or cubes for straightforward collision checking. Adaptive grids, like octrees in 3D, refine resolution in complex regions near obstacles while coarsening elsewhere, reducing overall cell count through hierarchical subdivision. The resolution trade-off is critical: coarse grids enable fast computation but yield suboptimal, approximate paths due to limited detail, whereas fine grids provide higher accuracy at the cost of exponential memory and time demands. A key algorithm is A*, which performs informed search on the grid graph to find an optimal path by minimizing the evaluation function f(n) = g(n) + h(n), where g(n) is the exact cost from the start to node n, and h(n) is a heuristic estimating the cost from n to the goal. For admissibility in velocity-bounded systems, the heuristic is often h(q) = \frac{\|q - q_{goal}\|}{v_{max}}, where \|q - q_{goal}\| is the Euclidean distance to the goal and v_{max} is the maximum velocity, ensuring h(n) \leq h^*(n) to guarantee optimality in the discrete graph. Dijkstra's algorithm serves as a special case without heuristics, suitable for uniform costs. These methods are complete and optimal in the discrete representation, offering guaranteed solutions when they exist, but they suffer from the curse of dimensionality, with cell counts growing exponentially (e.g., approximately 10^6 cells for a 6D space at a modest resolution of 10 cells per dimension), rendering them impractical beyond 3-4 DOF. Variants include wavefront propagation, which computes distance fields by iteratively expanding from the goal using a breadth-first or bucketed priority expansion, akin to Dial's algorithm, to generate smooth potential-based paths while avoiding local minima in grid representations. Hybrid approaches combine grid search with continuous methods, such as refining grid paths via boundary-value solvers for precise obstacle avoidance.
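
A minimal A* sketch on a 2D occupancy grid follows, using a Euclidean-distance heuristic with unit step costs so that admissibility holds; the map, start, and goal cells are illustrative assumptions.

```python
import heapq
import math

# Minimal sketch of A* on a 2-D occupancy grid; the map and start/goal cells
# below are illustrative assumptions. 0 = free cell, 1 = obstacle cell.
GRID = [
    [0, 0, 0, 0, 0],
    [1, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 1],
    [0, 0, 0, 0, 0],
]
START, GOAL = (0, 0), (4, 4)

def neighbors(cell):
    """4-connected neighbors that stay on the grid and in free space."""
    r, c = cell
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(GRID) and 0 <= nc < len(GRID[0]) and GRID[nr][nc] == 0:
            yield (nr, nc)

def heuristic(cell):
    """Admissible Euclidean-distance estimate of the remaining cost."""
    return math.dist(cell, GOAL)

def a_star():
    g = {START: 0.0}
    parent = {}
    frontier = [(heuristic(START), START)]      # priority queue ordered by f = g + h
    while frontier:
        _, current = heapq.heappop(frontier)
        if current == GOAL:
            path = [current]
            while current != START:
                current = parent[current]
                path.append(current)
            return path[::-1]
        for nxt in neighbors(current):
            tentative = g[current] + 1.0        # uniform step cost between adjacent cells
            if tentative < g.get(nxt, math.inf):
                g[nxt], parent[nxt] = tentative, current
                heapq.heappush(frontier, (tentative + heuristic(nxt), nxt))
    return None                                 # no path exists at this resolution

print(a_star())
```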

Sampling-based methods

Sampling-based methods in motion planning are probabilistic algorithms that generate collision-free paths by randomly sampling points in the configuration space (C-space) and constructing an approximate graph or tree representation of the free space (C_free). These approaches are particularly effective for high-dimensional problems where exhaustive search is infeasible, as they avoid discretizing the entire space and instead focus on exploring feasible regions through random exploration. By leveraging local planners to connect samples, they build a roadmap or tree that can be queried for paths between start and goal configurations. The Probabilistic Roadmap (PRM) method, introduced by Kavraki et al., operates in two phases: a preprocessing phase where nodes are uniformly sampled in C_free, and a query phase where paths are found by searching the resulting graph. In the preprocessing phase, a set of N random configurations q_1, q_2, \dots, q_N are generated in C-space, retaining only those that lie in C_free after collision checking. Edges are then added between nearby nodes: specifically, an edge exists between q_i and q_j if \|q_i - q_j\| < \epsilon (for some connection radius \epsilon) and the straight-line path between them is collision-free, verified using a local planner. For query resolution, the start and goal configurations are added to the graph, connected to nearby nodes, and a graph search (e.g., A* or Dijkstra) finds a path if one exists. This graph approximates the connectivity of C_free, with denser sampling improving coverage. In contrast to PRM's global preprocessing, the Rapidly-exploring Random Tree (RRT) algorithm, developed by LaValle, builds an exploration tree incrementally from the start configuration q_{start} without a separate query phase, making it suitable for single-query planning. The process begins with the tree containing only q_{start}. For each iteration, a random configuration q_{rand} is sampled uniformly in C-space. The nearest node q_{near} in the tree to q_{rand} is found, and a local planner extends from q_{near} toward q_{rand} by a fixed step size to produce q_{new}, which is added to the tree if collision-free. This biased exploration toward unoccupied regions ensures rapid coverage of C_free, with the goal reached when a node is sufficiently close to q_{goal}. RRT is probabilistically complete, meaning the probability of finding a solution (if one exists) approaches 1 as the number of samples n \to \infty. Variants of these methods address specific limitations. RRT-Connect extends RRT bidirectionally by growing two trees simultaneously—one from q_{start} and one from q_{goal}—and connecting them when their branches meet, significantly reducing planning time in practice. For optimality, Informed RRT* builds on RRT* (an asymptotically optimal variant of RRT) by focusing sampling within an ellipsoidal heuristic region derived from the current best path cost, enabling anytime convergence to optimal solutions while maintaining probabilistic completeness. These variants inherit the asymptotic properties of their base algorithms, succeeding with probability approaching 1 as iterations increase. Despite their strengths, sampling-based methods face challenges in regions of C_free with low volume, such as narrow passages, where uniform sampling rarely places nodes, leading to inefficient exploration or failure to connect components. To mitigate this, specialized local planners are often used, such as steering functions for non-holonomic systems (e.g., car-like robots) that respect curvature constraints instead of straight lines.
Performance-wise, constructing the PRM graph typically requires O(n \log n) time for n samples when using efficient nearest-neighbor data structures like k-d trees, with collision checks dominating the cost in high dimensions. These methods scale well to systems with up to 100 degrees of freedom (DOF), as demonstrated in applications like humanoid robot manipulation.
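
The sketch below implements a basic RRT in a 2D configuration space with circular obstacles, following the sample-nearest-steer-extend loop described above; for brevity it collision-checks only the new node rather than the full connecting segment, and all numerical parameters are illustrative assumptions.

```python
import math
import random

# Minimal RRT sketch in a 2-D configuration space with circular obstacles;
# the bounds, obstacles, and step size are illustrative assumptions.
random.seed(0)
BOUNDS = (0.0, 10.0)
OBSTACLES = [((5.0, 5.0), 1.5)]                 # (center, radius)
Q_START, Q_GOAL = (1.0, 1.0), (9.0, 9.0)
STEP, GOAL_TOL, MAX_ITERS = 0.5, 0.5, 5000

def collision_free(q):
    return all(math.dist(q, c) > r for c, r in OBSTACLES)

def steer(q_near, q_rand):
    """Move from q_near toward q_rand by at most STEP."""
    d = math.dist(q_near, q_rand)
    if d <= STEP:
        return q_rand
    t = STEP / d
    return (q_near[0] + t * (q_rand[0] - q_near[0]),
            q_near[1] + t * (q_rand[1] - q_near[1]))

def rrt():
    tree = {Q_START: None}                      # node -> parent
    for _ in range(MAX_ITERS):
        q_rand = (random.uniform(*BOUNDS), random.uniform(*BOUNDS))
        q_near = min(tree, key=lambda q: math.dist(q, q_rand))
        q_new = steer(q_near, q_rand)
        # A full implementation would also collision-check the q_near -> q_new segment.
        if collision_free(q_new):
            tree[q_new] = q_near
            if math.dist(q_new, Q_GOAL) < GOAL_TOL:
                path = [q_new]
                while tree[path[-1]] is not None:
                    path.append(tree[path[-1]])
                return path[::-1]
    return None                                 # no path found within the budget

path = rrt()
print("no path found" if path is None else f"found path with {len(path)} nodes")
```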

Optimization and potential field methods

Optimization-based and potential field methods in motion planning generate smooth, collision-free trajectories by formulating the problem as a continuous optimization task or by navigating through artificial force fields, contrasting with discrete graph searches. These approaches are particularly suited for real-time applications where dynamic environments require reactive, gradient-driven motion. They often operate in configuration space, producing paths that can be converted to trajectories with velocity and acceleration profiles. Artificial potential field methods treat the robot's configuration q as a particle moving under the influence of virtual forces derived from a total potential function U(q). The potential is composed of an attractive component U_{\text{att}}(q) = \frac{1}{2} \| q - q_{\text{goal}} \|^2, which pulls the robot toward the goal configuration q_{\text{goal}}, and a repulsive component U_{\text{rep}}(q) = \frac{1}{2} \eta \left( \frac{1}{d(q, O)} - \frac{1}{\rho_0} \right)^2 if d(q, O) < \rho_0, where d(q, O) is the distance to the nearest obstacle O, \eta is a positive gain, and \rho_0 is the influence radius of the repulsive field. The total potential is U(q) = U_{\text{att}}(q) + \sum U_{\text{rep}}(q), and the motion is generated by descending the negative gradient -\nabla U(q) using gradient descent updates of the form q \leftarrow q + \alpha F \Delta t, where the force F = -\nabla U(q) and \alpha is a step size. This method, originally proposed for real-time obstacle avoidance in manipulators and mobile robots, enables reactive planning by continuously recomputing forces based on sensor data. A key advantage is its computational efficiency for real-time reactivity in dynamic settings, but it suffers from local minima traps, such as in U-shaped obstacles where the attractive and repulsive forces balance, halting progress toward the goal. Optimization-based methods frame motion planning as a nonlinear programming problem to minimize a cost functional subject to dynamics and collision constraints, yielding smoother trajectories than potential fields. A prominent example is CHOMP (Covariant Hamiltonian Optimization for Motion Planning), which optimizes a trajectory \tau(t) by minimizing the integral cost \int_0^T c(\tau(t), \dot{\tau}(t)) \, dt, where c penalizes acceleration, velocity, and obstacle proximity, while enforcing dynamics and feasibility. CHOMP uses gradient-based optimization invariant to time reparametrization, making it effective for local refinement of initial paths into smooth, low-cost motions. Variants include Model Predictive Control (MPC), which solves receding-horizon optimizations to generate trajectories over a finite time window, replanning at each step for robustness to disturbances in robotic systems like autonomous vehicles. Sequential Quadratic Programming (SQP) approximates the nonlinear problem through iterative quadratic subproblems, enabling efficient handling of constraints in trajectory optimization for manipulators and mobile platforms. These methods excel in producing smooth paths in low-dimensional spaces, such as 2D or 6-DOF manipulation, but scale poorly to high dimensions without approximations.
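
A minimal sketch of potential-field descent for a point robot follows, combining the attractive and repulsive terms above and following the negative gradient; the gain, obstacle placement, and step size are illustrative assumptions, and the method can still stall in local minima as noted.

```python
import numpy as np

# Minimal sketch of artificial-potential-field descent for a point robot;
# gains, obstacle layout, and step size are illustrative assumptions.
q_goal = np.array([5.0, 5.0])
obstacles = [np.array([2.5, 2.6])]
eta, rho0, alpha = 1.0, 1.5, 0.05        # repulsive gain, influence radius, step size

def grad_U(q):
    """Gradient of U = U_att + sum U_rep for point obstacles."""
    grad = q - q_goal                                 # attractive term: grad of 0.5*||q - q_goal||^2
    for o in obstacles:
        d = np.linalg.norm(q - o)
        if d < rho0:                                  # repulsive term active inside influence radius
            grad += eta * (1.0 / rho0 - 1.0 / d) * (q - o) / d**3
    return grad

q = np.array([0.0, 0.0])
for _ in range(500):
    q = q - alpha * grad_U(q)                         # follow the negative gradient
    if np.linalg.norm(q - q_goal) < 0.05:
        break
print(q)
```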

Modern and Learning-Based Approaches

Reinforcement learning techniques

Reinforcement learning (RL) techniques address motion planning by enabling agents to learn policies through trial-and-error interactions with the environment, particularly in model-free settings where the dynamics are unknown. In these approaches, the state s typically encompasses the robot's configuration q, velocity, and obstacle information, while actions a correspond to control inputs such as joint torques or velocities. Algorithms like Q-learning and Proximal Policy Optimization (PPO) have been adapted for motion tasks, allowing robots to discover collision-free paths without explicit geometric modeling. The core formulation models motion planning as a Markov Decision Process (MDP), where the agent seeks a policy \pi(a|s) that maximizes the expected cumulative return. Rewards are often designed as r(s,a) = -d(s, g) - c \cdot \mathbb{I}_{\text{collision}}(s'), with d(s, g) denoting the distance to the goal g, c a penalty coefficient, and \mathbb{I} the collision indicator, encouraging progress while penalizing unsafe actions. The value function follows the Bellman equation for Q-learning variants: Q(s,a) = r(s,a) + \gamma \max_{a'} Q(s', a'), where \gamma \in [0,1) is the discount factor and s' is the next state; this enables value iteration to converge to an optimal policy in discrete spaces, extended to continuous ones via deep approximations. Key algorithms include Deep Deterministic Policy Gradient (DDPG), introduced in 2015 for continuous action spaces in robotics, which combines actor-critic methods to handle high-dimensional control for tasks like robotic arm manipulation. More recently, RL has guided sampling-based planners, such as using learned sampling distributions in Rapidly-exploring Random Trees (RRT) to bias exploration toward promising regions, improving efficiency in cluttered environments post-2020. Surveys from 2024-2025 on reinforcement learning for motion planning (RL-MoP) highlight efforts to mitigate sample inefficiency through hierarchical RL, which decomposes tasks into high-level goal selection and low-level trajectory execution, or model-based pre-training to initialize policies with simulated dynamics. Challenges persist, including the sim-to-real gap, where policies trained in simulation underperform on physical robots due to unmodeled friction and sensor noise, addressed via domain randomization. Safety during exploration is another hurdle, tackled through constrained RL formulations that enforce hard limits on risk via Lagrangian penalties or shielding. Integration of RL often positions it for high-level decisions, such as selecting waypoints or modes, while delegating low-level motion to classical planners like potential fields for guaranteed smoothness. This hybrid approach leverages RL's adaptability in uncertain settings alongside deterministic guarantees.
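
The sketch below applies the tabular Q-learning update above to a toy grid-world motion task with a goal bonus and collision penalty; the reward shaping, hyperparameters, and environment are illustrative assumptions rather than a recommended design.

```python
import random

# Minimal tabular Q-learning sketch on a toy 4x4 grid world; the reward shaping
# (goal bonus, collision penalty) and hyperparameters are illustrative assumptions.
random.seed(0)
SIZE, GOAL = 4, (3, 3)
OBSTACLES = {(1, 1), (2, 1)}
ACTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1)]
alpha, gamma, eps = 0.5, 0.9, 0.2
Q = {((r, c), a): 0.0 for r in range(SIZE) for c in range(SIZE) for a in ACTIONS}

def step(s, a):
    """Transition: bumping into walls or obstacles keeps the state and is penalized."""
    nxt = (s[0] + a[0], s[1] + a[1])
    if not (0 <= nxt[0] < SIZE and 0 <= nxt[1] < SIZE) or nxt in OBSTACLES:
        return s, -1.0                      # collision penalty, no motion
    return nxt, (10.0 if nxt == GOAL else -0.1)

for _ in range(2000):                       # training episodes
    s = (0, 0)
    for _ in range(50):
        a = random.choice(ACTIONS) if random.random() < eps else \
            max(ACTIONS, key=lambda a: Q[(s, a)])
        s2, r = step(s, a)
        # Q-learning update: Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, a2)] for a2 in ACTIONS) - Q[(s, a)])
        s = s2
        if s == GOAL:
            break

# Greedy rollout of the learned policy from the start cell.
s, path = (0, 0), [(0, 0)]
while s != GOAL and len(path) < 20:
    s, _ = step(s, max(ACTIONS, key=lambda a: Q[(s, a)]))
    path.append(s)
print(path)
```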

Data-driven and neural network methods

Data-driven and neural network methods in motion planning leverage supervised learning techniques to approximate complex planning problems by training on datasets of expert-generated trajectories. These approaches typically involve collecting high-quality demonstrations from classical planners or human experts, then using neural networks to map input configurations—such as start states q_{\text{start}}, goal states q_{\text{goal}}, and environmental maps—to feasible paths. For instance, neural motion planners can predict collision-free trajectories directly from sensor data and goal specifications, bypassing exhaustive geometric searches. This paradigm shifts the computational burden from online planning to offline training, enabling rapid inference in real-time applications like robotics and autonomous vehicles. Common architectures include convolutional neural networks (CNNs) for processing occupancy grids or image-based maps to predict free space and trajectory waypoints. Graph neural networks (GNNs) extend this by modeling scenes as graphs, where nodes represent obstacles or configurations and edges capture spatial relationships, allowing the network to propagate information for path prediction in structured environments. Post-2020 developments have incorporated transformers for sequence modeling, treating trajectories as ordered sequences and using self-attention mechanisms to capture long-range dependencies in dynamic scenes. These architectures are trained end-to-end on labeled trajectory data, often achieving sub-second planning times compared to seconds or minutes for traditional methods. A core formulation for these neural planners is a parameterized policy \pi_\theta(q_{\text{start}}, q_{\text{goal}}, \text{map}) that approximates the optimal path, where \theta denotes the network weights and the output is a sequence of waypoints approximating the expert trajectory. The policy \pi_\theta is trained by minimizing the expected loss \theta^* = \arg\min_\theta \mathbb{E}[\mathcal{L}(\pi_\theta(q_{\text{start}}, q_{\text{goal}}, \text{map}), \tau^*)], where \tau^* is the ground-truth expert path and \mathcal{L} is typically a mean squared error (MSE) loss on trajectory points, often augmented with collision penalties. Variants of these methods emphasize imitation learning strategies, such as behavioral cloning (BC), which directly regresses expert actions from states, and dataset aggregation (DAgger), which iteratively collects corrective demonstrations to address distribution shift. In autonomous vehicle contexts, end-to-end planners integrate neural networks with model predictive control (MPC), using learned dynamics models to optimize trajectories while ensuring safety constraints. These approaches have demonstrated success in urban driving scenarios, generating smooth, human-like paths with minimal collisions. Recent advances, as surveyed in 2025, integrate deep neural networks with large language models (LLMs) for high-level planning queries, where LLMs decompose natural language instructions into subgoals that guide neural trajectory generation. Diffusion models have emerged for trajectory synthesis (2023–2025), probabilistically sampling diverse paths from noise conditioned on start-goal pairs and maps, offering improved handling of multi-modal environments over deterministic networks. These methods provide fast inference, often in milliseconds versus seconds for classical algorithms, enabling deployment in time-critical systems.
However, they face challenges in generalization to unseen environments, requiring large diverse datasets to mitigate overfitting and covariate shift.
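
A minimal behavioral-cloning sketch is shown below (using PyTorch): a small network maps start and goal configurations to a fixed-length waypoint sequence and is trained with an MSE imitation loss. Here the "expert" is straight-line interpolation standing in for a classical planner, and all sizes and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Minimal behavioral-cloning sketch: a small network maps (q_start, q_goal) to a
# fixed-length sequence of waypoints, trained with an MSE loss against "expert"
# trajectories. The expert here is straight-line interpolation, a stand-in for
# paths produced by a classical planner; sizes and hyperparameters are illustrative.
torch.manual_seed(0)
N_WAYPOINTS, DIM = 8, 2

def expert_trajectory(q_start, q_goal):
    t = torch.linspace(0, 1, N_WAYPOINTS).unsqueeze(1)
    return (1 - t) * q_start + t * q_goal            # (N_WAYPOINTS, DIM)

# Synthetic dataset of (start, goal) pairs and expert paths.
starts = torch.rand(256, DIM)
goals = torch.rand(256, DIM)
targets = torch.stack([expert_trajectory(s, g) for s, g in zip(starts, goals)])

policy = nn.Sequential(nn.Linear(2 * DIM, 64), nn.ReLU(),
                       nn.Linear(64, N_WAYPOINTS * DIM))
optim = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(500):
    pred = policy(torch.cat([starts, goals], dim=1)).view(-1, N_WAYPOINTS, DIM)
    loss = loss_fn(pred, targets)                    # imitation loss on waypoints
    optim.zero_grad()
    loss.backward()
    optim.step()
print(float(loss))
```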

Task and motion planning integration

Task and motion planning (TAMP) integrates high-level symbolic task planning with low-level continuous motion planning to address complex, long-horizon robotic problems that require both discrete decision-making and geometric feasibility. In TAMP, problems are decomposed into a hierarchy where symbolic tasks—such as grasping or navigating—are represented using languages like the Planning Domain Definition Language (PDDL), while geometric motions ensure physical realizability in continuous spaces. This hybrid approach searches over a task-motion hierarchy, interleaving discrete planning to generate action sequences with continuous planning to validate and refine trajectories, enabling robots to handle environments with obstacles and manipulation constraints. Key components of TAMP frameworks include a discrete planner, such as the Fast-Forward (FF) heuristic search algorithm for generating task skeletons from PDDL specifications, interleaved with a continuous planner like Probabilistic Roadmap (PRM) for feasibility checks on motions. The integration often proceeds iteratively: the discrete planner proposes symbolic actions with parameters (e.g., object locations), which the continuous planner evaluates by sampling collision-free paths; failures refine the search by pruning infeasible branches. Optimization in these frameworks typically minimizes a hybrid cost function, defined as the sum of discrete task costs (e.g., action durations) and continuous motion lengths (e.g., path integrals), often represented and solved using AND/OR graphs to decompose the search space into alternative subplans. Prominent algorithms in TAMP include PDDLStream (2019), which extends PDDL with stream-based sampling to integrate symbolic planners and blackbox solvers like motion planners, reducing the need for full discretization by lazily generating continuous solutions during search. Recent advancements, highlighted in 2024 surveys, incorporate large language models (LLMs) for language-guided TAMP, such as LLM3, where pre-trained models like GPT-4 propose and refine action sequences informed by natural language instructions, bridging high-level goals with motion primitives. Challenges in TAMP persist, particularly in scaling to long-horizon tasks due to the exponential growth in search spaces from combining discrete and continuous dimensions, and in grounding symbolic representations to precise geometry amid perceptual uncertainties. Developments in the 2020s have emphasized bimanual and mobile manipulation scenarios, with frameworks leveraging hierarchical reinforcement learning (RL) to learn reusable task primitives that accelerate motion feasibility checks. Additionally, RL integration has focused on learning heuristics for backjumping in search trees, improving efficiency for contact-rich tasks without sacrificing probabilistic completeness.

Analysis and Properties

Probabilistic completeness and optimality

In motion planning, resolution completeness refers to the property of grid-based algorithms that guarantees finding a solution if one exists, provided the grid resolution is sufficiently fine relative to the problem's clearance requirements. This form of completeness arises from discretizing the configuration space into a lattice, where finer grids approximate continuous spaces more accurately, ensuring that narrow passages are navigable once the cell size falls below a critical threshold determined by obstacle geometry. For instance, algorithms like A* on uniform grids achieve this by exhaustively searching the discrete graph, converging to a valid path as resolution increases, though they may fail at coarse resolutions due to discretization errors. Probabilistic completeness (PC) provides a weaker but more practical guarantee for sampling-based planners in high-dimensional spaces, stating that the probability of failure approaches zero as the computation time or number of samples tends to infinity, assuming a solution exists and the free space has positive measure. Pioneered in the Probabilistic Roadmap (PRM) method, this property holds because random sampling densely covers the configuration space over iterations, with collision checks ensuring valid connections in the roadmap graph. Similarly, the Rapidly-exploring Random Tree (RRT) algorithm exhibits PC by incrementally growing a tree from the start configuration toward the goal, probabilistically exploring the space through biased sampling and steering, though it may require extensions like RRT-Connect for bidirectional search to improve efficiency. In practice, PC implies high success rates in benchmarks, but failure probability decays exponentially with samples only under uniform sampling assumptions. For motion planning under uncertainty, such as partial observability or stochastic dynamics, probabilistic completeness extends to belief space—the space of probability distributions over possible states—where planners sample and propagate beliefs to account for estimation errors. In this setting, PC ensures that the probability of finding a feedback policy that maintains beliefs within safe bounds approaches one as sampling increases, provided the belief space is continuous and the uncertainty model allows measurable coverage. Algorithms like Feedback-based Information Roadmaps (FIRM) achieve this by constructing roadmaps in belief space, discretizing distributions (e.g., via particles or Gaussians) and verifying reachability under maximum likelihood observations, thus handling real-world robotics scenarios with sensor noise. This extension is crucial for tasks like localization-aware navigation, where state-space PC would fail due to unmodeled uncertainties. Optimality in motion planning addresses not just existence but the quality of solutions, with asymptotic optimality (AO) being a key guarantee for advanced sampling-based methods: as computation time approaches infinity, the cost of the returned path converges almost surely to the optimal cost c^*. This is formalized by refining a search graph (e.g., roadmap or tree) until the path cost from start q_\text{start} to goal q_\text{goal} satisfies \text{cost}(q_\text{start}, q_\text{goal}) \leq c^* (1 + \epsilon) for any \epsilon > 0, achieved through rewiring to minimize costs during refinement. Such convergence relies on dense sampling and informed selection, as in RRT*, which extends RRT by adding a cost-optimizing rewiring step after each insertion, or FMT*, which uses a batch-sampling approach with dynamic programming to propagate optimal costs across a grid-like witness set. Anytime optimality builds on AO by allowing interruption at any time, returning progressively better solutions without restarting, making these planners suitable for time-constrained real-time applications. Evaluation of these properties often relies on metrics like success rate (fraction of solved instances within time budgets) and path quality, measured by length, smoothness (e.g., curvature or jerk), or cost relative to optimal. Benchmarks such as those generated by MotionBenchMaker provide standardized datasets for manipulation planning tasks, enabling comparisons across planners. These metrics highlight trade-offs, as planners like FMT* may require more computation than PC-only variants but yield superior trajectories in high-impact applications like autonomous driving. Recent advances as of 2025 have extended these properties to learning-based methods, such as neural network-guided sampling and tensor-based optimization, which maintain probabilistic completeness and asymptotic optimality while improving efficiency in high-dimensional and dynamic environments.

Computational complexity

The general motion planning problem is known to be PSPACE-hard, even for systems of rigid bodies in three-dimensional environments with polygonal obstacles. This hardness arises from the need to explore a configuration space that can encode complex computational problems, such as simulating Turing machines through sequences of configurations. Early exact algorithms for solving the problem exhibited doubly exponential time complexity in the number of degrees of freedom (DOF), stemming from the combinatorial growth in the arrangement of configuration-space obstacles. Optimal motion planning, which seeks the shortest or minimal-cost path, is NP-hard even in relatively simple cases, such as finding the shortest path for a translating rigid body amid polyhedral obstacles in three dimensions. In practice, sampling-based methods offer more tractable alternatives, with roadmap construction in probabilistic roadmaps typically requiring O(n \log n) time for n milestones, leveraging efficient nearest-neighbor searches in low dimensions. Grid-based methods, while complete under sufficient resolution, incur complexity of O(r^d), where r is the grid resolution and d is the configuration space dimension, leading to rapid growth that limits their applicability beyond low dimensions. High-dimensional problems, such as planning for robotic systems with up to 100 DOF, exacerbate these challenges due to the curse of dimensionality, where exhaustive search becomes infeasible. Anytime algorithms address this by performing incremental computation, providing initial suboptimal solutions quickly and refining them over time to balance completeness with practicality. Empirical benchmarks demonstrate that sampling-based planners scale effectively to higher dimensions, outperforming exact methods in cluttered environments. To enable real-time performance, trade-offs favor suboptimal planners like greedy variants of rapidly-exploring random trees (RRT), which prioritize speed over global optimality while maintaining probabilistic completeness in expansive spaces.

Problem Variants

Dynamic and differential constraints

Motion planning under dynamic and differential constraints extends classical geometric planning by incorporating the physical limitations of robotic systems, such as velocity bounds, acceleration limits, and non-holonomic behaviors that restrict instantaneous motion directions. Kinematic constraints primarily address velocity limits and directional restrictions, ensuring that planned paths respect the robot's inability to move sideways or instantaneously change orientation. For car-like robots, which model vehicles with a minimum turning radius due to fixed wheelbases, optimal paths under these constraints are well-characterized by Dubins paths for forward-only motion and Reeds-Shepp paths when reversing is allowed. Dubins paths, introduced in 1957, consist of at most three segments—straight lines and circular arcs of fixed radius—connecting initial and final poses while minimizing length. Reeds-Shepp paths, from 1990, extend this to include backward motions, yielding up to 48 possible path types but typically fewer for optimality, enabling maneuvers like parking in tight spaces. A canonical example of non-holonomic kinematic constraints arises in car-like or unicycle models, where the robot's velocity is aligned with its heading. The differential equations governing such motion are: \begin{align*} \dot{x} &= v \cos \theta, \\ \dot{y} &= v \sin \theta, \\ \dot{\theta} &= \omega, \end{align*} with v as linear velocity and \omega as angular velocity, both bounded by the system's capabilities; feasible paths must satisfy conditions derived from these equations, ensuring controllability despite the constraints. These constraints reduce the instantaneously controllable directions compared to holonomic systems, requiring paths that accumulate orientation changes through curvature rather than direct translation. Dynamic constraints introduce full equations of motion, modeled as \dot{\mathbf{x}} = f(\mathbf{x}, \mathbf{u}), where \mathbf{x} is the state (position, velocity, etc.) and \mathbf{u} lies in a compact control set, capturing inertia, forces, and torques. Kinodynamic planning addresses these by synthesizing trajectories that obey both kinematic and dynamic laws, often treating the problem as optimal control under differential constraints. Algorithms for kinodynamic planning include lattice planners, which discretize the state space into a graph of precomputed motion primitives—short, feasible trajectory segments under discrete controls—allowing efficient search via A* or similar methods while approximating continuous dynamics. For instance, lattice planners generate symmetric, repeating control sequences offline, enabling real-time planning in dynamic environments by evaluating swept volumes for collision avoidance. Sampling-based methods like Rapidly-exploring Random Trees (RRT) adapt to differential constraints by integrating the equations of motion via ordinary differential equation (ODE) solvers to propagate states forward in time, ensuring collision-free branches that respect the dynamics model. The Kinodynamic RRT* variant extends this to asymptotic optimality by rewiring with optimal local connections, suitable for systems with linear differential constraints. Challenges in these approaches include distinguishing feasible sets (states satisfying constraints) from reachable sets (states attainable from the initial state), as feasibility alone may permit theoretically possible but practically inaccessible configurations due to control bounds. Time-optimal planning under dynamic constraints often invokes Pontryagin's minimum principle, which provides necessary conditions for optimality by minimizing the Hamiltonian along trajectories, guiding bang-bang control strategies for minimal-time paths.
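
A minimal sketch of the forward-propagation primitive used by kinodynamic planners follows: a sampled control for the unicycle model above is integrated with explicit Euler steps to extend a tree branch; the control bounds and step size are illustrative assumptions, and a full planner would also collision-check the intermediate states.

```python
import math

# Minimal sketch of forward state propagation for the unicycle model above,
# the primitive a kinodynamic planner (e.g., kinodynamic RRT) uses to extend a
# tree branch under a sampled control; bounds and step sizes are illustrative.
V_MAX, OMEGA_MAX = 1.0, 1.0        # assumed control bounds
DT = 0.05                          # Euler integration step

def propagate(state, v, omega, duration):
    """Euler-integrate x' = v cos(theta), y' = v sin(theta), theta' = omega."""
    v = max(-V_MAX, min(V_MAX, v))
    omega = max(-OMEGA_MAX, min(OMEGA_MAX, omega))
    x, y, theta = state
    t = 0.0
    while t < duration:
        x += v * math.cos(theta) * DT
        y += v * math.sin(theta) * DT
        theta += omega * DT
        t += DT
        # A planner would collision-check each intermediate state here.
    return (x, y, theta)

print(propagate((0.0, 0.0, 0.0), v=1.0, omega=0.5, duration=2.0))
```
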
Variants of these constraints appear in underactuated systems, where the number of control inputs is fewer than the degrees of freedom, such as quadrotor drones with four rotor thrusts controlling six-dimensional pose (three translational, three rotational). In such cases, planning must exploit coupling between translation and rotation, often using differential flatness to generate feasible paths in the reduced control space while avoiding obstacles. These systems highlight the need for holistic state-space exploration, as underactuation limits direct control over all axes, necessitating indirect maneuvers like tilting for lateral motion.

Uncertainty and environmental factors

Motion planning under uncertainty addresses challenges arising from imperfect information, such as sensor noise and partial observability of the environment, which can lead to unreliable state estimates and potential collisions during execution. In robotic systems, sensor measurements often include stochastic noise, particularly in localization tasks where position estimates are corrupted by measurement error, degrading the accuracy of path trajectories in cluttered or dynamic settings. To model such partial observability, Partially Observable Markov Decision Processes (POMDPs) are employed, representing the robot's state as a belief distribution b(q) over possible configurations q, which encapsulates epistemic uncertainty about the true state. In POMDPs for motion planning, the belief state is updated using a Bayes filter upon receiving an observation z after action a, given by the equation: b'(q') \propto O(z \mid q') \sum_{q} T(q' \mid q, a) \, b(q), where O(z \mid q') is the observation likelihood, T(q' \mid q, a) is the transition probability, and the result is normalized over q'; this update propagates uncertainty through the planning horizon, enabling decisions that hedge against possible worlds. For moving obstacles, which introduce temporal variability and prediction errors, methods like velocity obstacles compute collision-free velocities by mapping relative motions into a velocity space, avoiding sets that lead to intersections within a time horizon. An extension, Optimal Reciprocal Collision Avoidance (ORCA), assumes cooperative agents and selects velocities that reciprocally guarantee safety by taking half the responsibility for evasion, scaling efficiently to multiple dynamic entities. Receding horizon planning complements these by repeatedly optimizing short-term trajectories while accounting for predicted obstacle motions, re-planning at each step to adapt to updates. Specific algorithms mitigate these uncertainties: Rapidly-exploring Random Belief Trees (RRBT) extend sampling-based methods like RRT into belief space, growing trees over belief distributions to find feasible plans under partial observability, with asymptotic optimality in continuous domains. Chance-constrained optimization formulates safety as probabilistic guarantees, solving for trajectories where the probability of collision satisfies P(\text{collision}) \leq \delta (e.g., \delta = 0.01) using approximations or scenario optimization, ensuring robustness without over-conservatism. Environmental factors like deformable objects and wind further complicate planning by altering geometry or applying stochastic forces mid-execution. For deformable objects, such as in surgical robotics, uncertainty propagates through non-rigid interactions, requiring belief-space planners that sample over possible deformations to maintain safety margins. Wind disturbances in aerial systems introduce additive perturbations to dynamics, modeled as Gaussian processes, necessitating online replanning to correct deviations; the Dynamic Window Approach (DWA) evaluates admissible velocity windows in real time, selecting controls that maximize progress while respecting acceleration limits and predicted environmental shifts. Recent advancements integrate reinforcement learning (RL) techniques for robust motion planning in uncertain environments, such as learning policies for aerial vehicles in windy conditions.
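
The sketch below implements the discrete belief update above for a toy one-dimensional corridor; the motion and observation models are illustrative assumptions, and the result is normalized so the belief sums to one.

```python
# Minimal sketch of the discrete belief update above for a 1-D corridor with
# three cells; the motion and observation models are illustrative assumptions.
STATES = [0, 1, 2]
belief = {q: 1.0 / 3 for q in STATES}            # uniform prior b(q)

def T(q_next, q, a):
    """Noisy motion model: action a moves one cell with prob 0.8, stays with 0.2."""
    intended = min(max(q + a, 0), 2)
    if q_next == intended:
        return 0.8 if intended != q else 1.0
    return 0.2 if q_next == q else 0.0

def O(z, q):
    """Noisy sensor: reports the true cell with prob 0.9."""
    return 0.9 if z == q else 0.05

def belief_update(belief, a, z):
    """b'(q') proportional to O(z|q') * sum_q T(q'|q,a) b(q), then normalized."""
    unnorm = {q2: O(z, q2) * sum(T(q2, q, a) * belief[q] for q in STATES)
              for q2 in STATES}
    total = sum(unnorm.values())
    return {q: p / total for q, p in unnorm.items()}

belief = belief_update(belief, a=1, z=1)         # move right, observe cell 1
print(belief)
```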

Multi-agent and hybrid systems

Multi-agent motion planning extends single-agent problems to scenarios involving multiple robots or entities that must coordinate to achieve collision-free paths while satisfying individual goals. In centralized approaches, the joint configuration space is searched holistically, defined as C_{\text{joint}} = C_1 \times C_2 \times \cdots \times C_n, where C_i is the configuration space of agent i, and collision avoidance requires that the distance between any pair of configurations q_i and q_j exceeds a safety threshold for all i \neq j. A prominent centralized method is Conflict-Based Search (CBS), which optimally resolves conflicts by building a conflict tree and replanning individual paths iteratively, achieving completeness and optimality in discrete spaces but scaling poorly with agent count due to exponential search complexity. Decoupled methods, such as prioritized planning, address scalability by sequencing agents according to a priority order, where higher-priority agents plan first without considering others, and lower-priority ones treat predecessors as dynamic obstacles. This approach reduces computational demands but risks suboptimal solutions or deadlocks, where agents block each other indefinitely in constrained environments. Coordination mechanisms enhance planning; for instance, communication protocols allow agents to share intentions in real time, while auction-based task allocation dynamically assigns goals to minimize conflicts in heterogeneous teams. Algorithms like Multi-RRT extend sampling-based methods to multi-agent settings by growing trees in the joint space or coordinating individual RRTs with conflict checks, enabling efficient exploration in continuous high-dimensional environments. Hybrid systems introduce discrete mode switches alongside continuous dynamics, such as a robot transitioning between walking and driving modes, modeled using hybrid automata that capture guarded transitions between continuous subspaces. These automata represent the system's evolution through discrete events triggering mode changes, ensuring reachability analysis accounts for both continuous dynamics and switching constraints. Motion planning in such systems often employs mixed-integer programming (MIP), formulating discrete choices as binary variables within an optimization framework to jointly solve for trajectories and mode sequences, as seen in formulations minimizing trajectory cost subject to collision and dynamics constraints. Challenges include exponential state growth from the product of mode combinations and deadlocks during transitions, exacerbated in multi-agent hybrids where inter-agent coordination must synchronize modes. In the 2020s, developments have emphasized swarm robotics, where decentralized algorithms enable large-scale coordination without central control, drawing on bio-inspired flocking models for emergent collision avoidance. Heterogeneous teams, such as UAV-ground pairings, leverage complementary capabilities (e.g., aerial scouting informing ground navigation) through layered architectures that decouple high-level task allocation from low-level motion planning, improving efficiency in dynamic scenarios. These advances mitigate earlier issues via distributed computation, though they retain sensitivity to communication delays and partial observability akin to single-agent planning under uncertainty.
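
A minimal prioritized-planning sketch follows: agents are planned in priority order with a breadth-first search over (cell, time) pairs, treating the space-time cells reserved by higher-priority agents as dynamic obstacles. The grid and agents are illustrative assumptions, and edge (swap) conflicts and post-arrival goal occupancy are ignored for brevity.

```python
from collections import deque

# Minimal prioritized-planning sketch on a shared grid: agents are planned in
# priority order with a breadth-first search over (cell, time), treating the
# space-time cells reserved by higher-priority agents as dynamic obstacles.
SIZE, HORIZON = 4, 12
AGENTS = [((0, 0), (3, 3)), ((3, 0), (0, 3))]       # (start, goal), listed by priority
MOVES = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]  # wait or move to a 4-neighbor

def plan(start, goal, reserved):
    """BFS in space-time avoiding cells already reserved at each timestep."""
    queue = deque([(start, 0, [start])])
    seen = {(start, 0)}
    while queue:
        cell, t, path = queue.popleft()
        if cell == goal:
            return path
        if t >= HORIZON:
            continue
        for dr, dc in MOVES:
            nxt = (cell[0] + dr, cell[1] + dc)
            if (0 <= nxt[0] < SIZE and 0 <= nxt[1] < SIZE
                    and nxt not in reserved.get(t + 1, set())
                    and (nxt, t + 1) not in seen):
                seen.add((nxt, t + 1))
                queue.append((nxt, t + 1, path + [nxt]))
    return None

reserved, plans = {}, []
for start, goal in AGENTS:                          # higher-priority agents plan first
    path = plan(start, goal, reserved)
    plans.append(path)
    for t, cell in enumerate(path or []):           # reserve this agent's space-time cells
        reserved.setdefault(t, set()).add(cell)
print(plans)
```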

Applications

Robotics and manipulation

Motion planning is essential in robotics and manipulation, where it enables precise control of robotic arms, mobile platforms, and humanoid systems to perform tasks in cluttered or dynamic environments. For industrial robotic arms, inverse kinematics solvers are integrated with trajectory planning to execute pick-and-place operations while avoiding collisions in shared human-robot workspaces. Sampling-based methods like Probabilistic Roadmaps (PRM) and Rapidly-exploring Random Trees (RRT) generate feasible paths by constructing graphs in configuration space, ensuring obstacle clearance through collision checks. Optimization techniques such as CHOMP further refine these paths for smoothness, with simulations on 6-DOF arms like the ABB IRB 120 demonstrating path lengths of approximately 0.76 m and success rates up to 100% in dynamic obstacle scenarios using methods like CHOMP and RRT-Connect. In mobile robotics, motion planning often incorporates simultaneous localization and mapping (SLAM) for environment modeling, paired with Adaptive Monte Carlo Localization (AMCL) for pose estimation and A* for global path planning to navigate indoor spaces efficiently. For legged locomotion, planners focus on whole-body coordination, generating footstep sequences and joint trajectories that maintain balance and avoid obstacles, as seen in modular frameworks that decompose complex motions into reusable action primitives. These approaches enable stable traversal over uneven terrain, with computation times under 500 ms for real-time execution on hardware platforms, as demonstrated in frameworks achieving 73-82 ms per step. Manipulation tasks extend motion planning to include grasp synthesis and dual-arm coordination, where algorithms select contact points on objects before computing collision-free trajectories for transport or assembly. Grasp planning uses predefined databases of feasible poses, integrated with randomized inverse kinematics to achieve solutions in 60 ms for single-arm tasks and 278 ms for dual-arm re-grasping tasks. Task and Motion Planning (TAMP) bridges high-level sequencing—such as block-stacking to maximize tower height—with low-level optimization, employing mixed-integer linear programming (MILP) to handle discrete actions and continuous dynamics in tabletop scenarios. Seminal demonstrations occurred during the DARPA Robotics Challenge (2013–2015), where teams applied PRM variants for manipulation in disaster-response tasks, such as valve turning and debris clearance, highlighting the need for robust planning in unstructured settings. A 2024 survey on manipulators in dynamic environments underscores advancements in learning-based planners like TD3, which adapt to moving obstacles with computation speeds around 0.015 s per trial, while faster methods like APF achieve 6.67 × 10^{-4} s. Challenges persist in contact-rich motions, exemplified by pushing tasks that model frictional interactions via optimization and semidefinite relaxations, yielding near-optimal plans in seconds for simulated push-box scenarios. Integration with sensory feedback requires hybrid force-position control to adjust trajectories based on state updates, addressing uncertainties in object poses. Sampling methods like PRM facilitate implementation by probabilistically exploring high-dimensional spaces for these contacts. Evaluation metrics emphasize practicality, with cycle times measuring end-to-end task duration and success rates quantifying reliability in benchmarks like the Yale-CMU-Berkeley (YCB) object set.
In YCB protocols, pick-and-place of household items achieves success rates often around 80% for rigid objects like mugs but lower for precise tasks like peg insertion under perturbations, providing standardized context for planner performance. As of 2025, learning-based methods have further been integrated into applications such as surgical robotics for enhanced precision in unstructured anatomies.
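As an illustration of the sampling-based planners mentioned above, the following is a minimal RRT sketch in a 2-D configuration space with circular obstacles. The obstacle layout, bounds, step size, goal bias, and goal tolerance are illustrative assumptions, and collisions are checked only at extension endpoints rather than along the full edge, unlike production planners.

```python
# Minimal RRT sketch in a 2-D configuration space with circular obstacles.
# Samples random configurations, extends the nearest tree node by a fixed step,
# and keeps only collision-free endpoints; for illustration, not optimized.
import math
import random

OBSTACLES = [((5.0, 5.0), 1.5), ((2.0, 7.0), 1.0)]   # (center, radius), assumed
STEP, GOAL_TOL, BOUNDS = 0.5, 0.5, (0.0, 10.0)

def collision_free(q):
    """Configuration is valid if it lies outside every circular obstacle."""
    return all(math.dist(q, center) > radius for center, radius in OBSTACLES)

def rrt(start, goal, max_iters=5000):
    nodes, parents = [start], {0: None}
    for _ in range(max_iters):
        # Goal bias: sample the goal 5% of the time to speed up convergence.
        sample = goal if random.random() < 0.05 else (
            random.uniform(*BOUNDS), random.uniform(*BOUNDS))
        near_idx = min(range(len(nodes)), key=lambda i: math.dist(nodes[i], sample))
        near = nodes[near_idx]
        d = math.dist(near, sample)
        if d < 1e-9:
            continue
        # Steer from the nearest node toward the sample by a fixed step length.
        new = (near[0] + STEP * (sample[0] - near[0]) / d,
               near[1] + STEP * (sample[1] - near[1]) / d)
        if not collision_free(new):
            continue
        nodes.append(new)
        parents[len(nodes) - 1] = near_idx
        if math.dist(new, goal) < GOAL_TOL:
            # Reconstruct the path by walking parent pointers back to the root.
            path, i = [], len(nodes) - 1
            while i is not None:
                path.append(nodes[i])
                i = parents[i]
            return path[::-1]
    return None

if __name__ == "__main__":
    print(rrt((1.0, 1.0), (9.0, 9.0)))
```

The same skeleton generalizes to higher-dimensional configuration spaces (e.g., joint angles of a 6-DOF arm) by replacing the 2-D sampling, distance, and collision checks with their C-space counterparts.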

Autonomous vehicles and navigation

Motion planning for autonomous vehicles (AVs) often employs a hierarchical approach to balance global route optimization with local reactive adjustments. In this framework, global planning uses algorithms such as A* to generate efficient, high-level paths toward overall goals, such as reaching a destination while adhering to road networks. Local planning then refines these paths using artificial potential fields (APF), which treat obstacles as repulsive forces and goals as attractive forces to enable real-time collision avoidance in dynamic environments. Behavior planning complements this by incorporating finite state machines (FSMs) to manage discrete maneuvers, such as lane changes, where states represent modes like "maintain lane," "check gap," or "execute merge," transitioning based on sensor data and traffic rules.

A key aspect of AV trajectory generation is minimizing jerk, the third time derivative of position, to ensure passenger comfort and smooth control inputs. This is typically formulated as an optimization problem minimizing the integral of squared jerk, \int \|\dddot{\tau}(t)\|^2 \, dt, subject to constraints such as velocity limits, acceleration bounds, and traffic regulations. For example, many AV systems integrate model predictive control (MPC) for trajectory planning, where MPC predicts future states over a receding horizon and solves constrained optimizations to generate feasible paths that account for vehicle dynamics and environmental interactions.

In unmanned aerial vehicles (UAVs) and drone swarms, motion planning extends to three-dimensional spaces, emphasizing trajectory generation that accounts for disturbances and ensures collision-free paths in cluttered airspace. Optimization techniques model disturbances such as wind as dynamic forces in the planning problem, adjusting velocities and headings to maintain formation while minimizing energy use. Recent surveys highlight sampling-based methods for UAV path planning, where variants of rapidly-exploring random trees (RRT) generate diverse trajectories that avoid static and moving obstacles in urban or cooperative swarm scenarios. In drone racing applications, RRT variants like RRT* provide asymptotically optimal paths by rewiring trees to shorten distances, enabling high-speed navigation through gates under tight timing constraints.

Advancements in data-driven methods have transformed AV planning, with 2025 reviews emphasizing the integration of reinforcement learning (RL) and neural networks to learn driving policies from vast datasets of driving scenarios. These approaches enable end-to-end systems that map raw sensor inputs directly to control outputs, bypassing modular pipelines for more adaptive behavior in unstructured environments. RL agents, for instance, optimize long-term rewards for safe merging or intersection negotiation by simulating interactions with surrounding traffic.

Significant challenges persist in achieving high-frequency real-time performance while handling uncertainty from dynamic elements. Motion planners must incorporate probabilistic models to predict obstacle trajectories and maintain safety margins, often using sampling for robust path evaluation. Vehicle-to-everything (V2X) communication addresses multi-agent coordination by sharing intent and state data among AVs, reducing collision risks in dense traffic through decentralized coordination.
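As a worked instance of the jerk-minimization objective, for a one-dimensional rest-to-rest move with fixed endpoints and zero boundary velocity and acceleration, the curve minimizing \int \|\dddot{s}(t)\|^2 \, dt is the classic quintic polynomial. The sketch below evaluates that closed-form solution; the 30 m over 5 s move and the function name are chosen purely for illustration.

```python
# Worked example of jerk minimization for a 1-D rest-to-rest move:
# with position fixed at both ends and zero boundary velocity/acceleration,
# the curve minimizing the integral of squared jerk is the quintic
#   s(t) = s0 + (sf - s0) * (10 u^3 - 15 u^4 + 6 u^5),  where u = t / T.
def min_jerk(s0, sf, T, t):
    u = t / T
    d = sf - s0
    pos = s0 + d * (10 * u**3 - 15 * u**4 + 6 * u**5)
    vel = d * (30 * u**2 - 60 * u**3 + 30 * u**4) / T        # first derivative
    acc = d * (60 * u - 180 * u**2 + 120 * u**3) / T**2       # second derivative
    jerk = d * (60 - 360 * u + 360 * u**2) / T**3             # third derivative
    return pos, vel, acc, jerk

if __name__ == "__main__":
    # Illustrative longitudinal move: 0 m to 30 m over 5 s, sampled at three times.
    for t in (0.0, 2.5, 5.0):
        p, v, a, _ = min_jerk(0.0, 30.0, 5.0, t)
        print(f"t={t:>4}: pos={p:6.2f} m  vel={v:5.2f} m/s  acc={a:5.2f} m/s^2")
```

Velocity and acceleration vanish at both endpoints, which is what makes the profile comfortable; practical planners add the velocity, acceleration, and obstacle constraints mentioned above on top of this unconstrained solution.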
