
Intelligent control

Intelligent control is a discipline within control engineering that integrates artificial intelligence methodologies to design systems capable of autonomously achieving high-level goals in uncertain, complex, and dynamically changing environments, often by emulating human-like reasoning, learning, and adaptation. Unlike conventional control approaches, which rely on precise mathematical models and predefined rules, intelligent control emphasizes robustness, flexibility, and the ability to handle incomplete or inexact information through techniques such as sensing, planning, and learning. The field originated in 1971, when King-Sun Fu coined the term to describe the convergence of artificial intelligence and automatic control, driven by the need for greater autonomy in systems amid increasing complexity in applications like robotics and space exploration. Early developments focused on hierarchical structures and symbolic reasoning, evolving from classical feedback control, rooted in 19th-century work by James Clerk Maxwell, to incorporate interdisciplinary elements from artificial intelligence and operations research. By the 1980s and 1990s, foundational contributions, including those by George N. Saridis on self-organizing systems, established intelligent control as a framework for addressing nonlinear, ill-defined problems where traditional methods fall short. Core techniques in intelligent control include fuzzy logic for managing imprecise data, neural networks for learning and function approximation, genetic algorithms for optimization, and knowledge-based expert systems for decision-making under uncertainty. Hybrid approaches, such as neuro-fuzzy controllers, combine these methods with conventional control to enhance performance in real-time scenarios. These methods enable systems to learn from experience, diagnose faults, and adapt to disturbances, distinguishing intelligent control from rigid, model-dependent paradigms. Applications span diverse domains, including robotics for autonomous navigation and human-robot interaction, process industries for optimization and fault diagnosis, renewable energy systems such as wind turbine control, and intelligent transportation for traffic management.
Recent advances as of 2025 emphasize data-driven integration with machine learning and reinforcement learning, improving efficiency in industrial automation, while in manufacturing, AI-driven automation addresses similar integration for enhanced productivity; challenges such as stability analysis and computational demands persist across these areas.

Fundamentals

Definition and Scope

Intelligent control is a discipline within control engineering that seeks to emulate aspects of human intelligence in automated systems, enabling capabilities such as adaptation to changing conditions, learning from experience, reasoning under uncertainty, and management of nonlinear dynamics without relying on complete mathematical models. This approach originated from the intersection of artificial intelligence and automatic control, as first conceptualized by Fu in 1971, who described it as enhancing traditional control to incorporate sensing, reasoning, and adaptive execution in environments with incomplete or inexact information. Unlike rigid, model-based methods, intelligent control prioritizes flexible, robust decision-making to achieve desired outcomes in dynamic settings. The scope of intelligent control encompasses applications where conventional control techniques falter, particularly in handling non-minimum phase systems, unstable dynamics, or highly uncertain environments characterized by unmodeled disturbances or nonlinear behaviors. It integrates artificial intelligence methods, including machine learning and knowledge-based reasoning, to address complex processes that demand autonomy and productivity, such as those in robotics, manufacturing, and autonomous vehicles. This field extends beyond simple feedback loops to support interdisciplinary solutions for self-organizing or adaptive systems, focusing on long-term performance in the absence of precise a priori models. Key characteristics of intelligent control include its hierarchical structure, which combines low-level feedback mechanisms for execution with high-level reasoning for strategic planning, thereby managing complexity through layered abstraction. It exhibits robustness to external disturbances and internal variations by incorporating adaptive strategies that maintain stability and performance amid uncertainties. Additionally, intelligent control is inherently goal-oriented, allowing systems to pursue objectives proactively without exhaustive predefined rules, often through learning processes that refine behavior over time.
A conceptual comparison of intelligent control versus conventional methods appears in cruise control systems, where traditional cruise control maintains a fixed speed using simple feedback, while intelligent variants employ sensors and AI-driven reasoning to dynamically adjust speed based on road uncertainties, surrounding vehicles, and environmental changes for safer, more efficient operation.

Relation to Classical Control

Classical control primarily relies on precise mathematical models of the system, such as transfer functions or state-space representations, to design controllers for linear time-invariant (LTI) systems. Common techniques include proportional-integral-derivative (PID) controllers, which adjust the control input based on the error, its integral, and its derivative, and linear quadratic regulator (LQR) methods, which optimize a quadratic cost function subject to linear dynamics. For instance, the PID control law is given by u(t) = K_p e(t) + K_i \int_0^t e(\tau) \, d\tau + K_d \frac{de(t)}{dt}, where K_p, K_i, and K_d are fixed gains tuned via methods like Ziegler-Nichols, assuming the system model is accurate and stable. These approaches excel in well-understood, predictable environments but require extensive prior knowledge of system parameters. However, classical control faces significant limitations when applied to real-world systems exhibiting unmodeled dynamics, nonlinearity, or time-varying parameters, as these violate the underlying assumptions of linearity and time-invariance. For example, in adaptive scenarios like robotic manipulators with varying payloads or fault-tolerant aerospace systems, fixed-gain controllers like PID or LQR can lead to instability or poor performance due to their inability to accommodate parameter drift or external disturbances without manual retuning. Such shortcomings are particularly evident in high-dimensional or uncertain environments, where deriving exact models becomes infeasible or overly simplistic. Intelligent control addresses these gaps by shifting toward model-free or data-driven paradigms that incorporate adaptive mechanisms for self-tuning and enhanced robustness, extending classical methods without discarding their foundational principles. Unlike fixed-gain designs, intelligent approaches enable automatic gain adjustment through learning mechanisms, allowing controllers to handle nonlinearities and variations dynamically, for instance by using input-output data to approximate unknown dynamics in real time.
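The PID law above can be sketched in discrete time. The following is a minimal illustration; the first-order plant, gains, and sampling period are invented for demonstration rather than tuned for any real system.

```python
# Minimal discrete-time PID sketch implementing
# u = Kp*e + Ki*integral(e) + Kd*de/dt. All numbers are illustrative.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt          # accumulate integral term
        derivative = 0.0 if self.prev_error is None else (
            (error - self.prev_error) / self.dt)  # finite-difference derivative
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Regulate a hypothetical first-order plant dx/dt = -x + u toward setpoint 1.0.
pid = PID(kp=2.0, ki=1.0, kd=0.1, dt=0.01)
x = 0.0
for _ in range(2000):                             # 20 s of simulated time
    u = pid.update(1.0, x)
    x += (-x + u) * 0.01                          # Euler step of the plant
print(round(x, 3))                                # settles near the setpoint
```

The integral term drives the steady-state error to zero, which a proportional-only controller cannot do for this plant.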
This evolution builds on classical control's stability guarantees while integrating higher-level reasoning to manage uncertainty. Hybrid approaches further bridge the two paradigms by combining low-level classical feedback loops, such as PID loops for precise tracking, with intelligent supervisors like fuzzy logic or neural networks for oversight and adaptation in complex scenarios. For example, in power plant boiler control, a classical regulator handles basic steam temperature regulation, while an intelligent layer adjusts setpoints to cope with load changes or faults, improving overall efficiency and reliability. These integrations leverage the reliability of classical methods alongside the flexibility of intelligent techniques, as demonstrated in industrial applications such as supervisory control in chemical processes.

Historical Development

Origins

The origins of intelligent control can be traced to the mid-20th century, when foundational concepts emerged from the interdisciplinary fields of cybernetics and early artificial intelligence. In the 1940s and 1950s, Norbert Wiener's development of cybernetics introduced key ideas of feedback and communication in systems, drawing parallels between mechanical control mechanisms and biological processes to enable adaptive behavior in machines. This framework laid the groundwork for control systems that could handle uncertainty and dynamic environments, influencing subsequent efforts to imbue control with intelligent attributes. Concurrently, Alan Turing's explorations in the 1950s proposed the possibility of machines exhibiting intelligent behavior through computational processes, emphasizing learning and adaptation capabilities that would later inform intelligent control paradigms. The term "intelligent control" was coined in 1971 by K.S. Fu to describe the integration of artificial intelligence and automatic control. By the 1970s, the limitations of classical deterministic control methods in complex, uncertain scenarios, particularly in space exploration and military operations, drove the integration of pattern recognition and decision theory into control frameworks, marking the practical emergence of intelligent control. Applications such as NASA's adaptive flight control for high-speed aircraft and military systems requiring robustness against variable conditions highlighted the need for controllers that could learn and adjust without precise models. Foundational work by George N. Saridis in this era introduced concepts of self-organizing systems and analytic formulations for intelligent control, emphasizing hierarchical structures and entropy-based performance measures.
A seminal contribution during this period was Ebrahim Mamdani's 1975 work on fuzzy logic controllers, which demonstrated linguistic rule-based synthesis for regulating a steam engine, enabling heuristic decision-making in ill-defined systems and bridging AI techniques with control engineering. The 1980s saw further consolidation through early applications of neural networks to control problems, building on reinforcement learning approaches that allowed systems to adapt via trial and error in dynamic settings. Pioneering efforts, such as those by Barto, Sutton, and Anderson, applied associative reward-penalty mechanisms to cart-pole balancing tasks, illustrating how neural-inspired methods could achieve stable control in nonlinear environments. These developments culminated in the 1990s with formal definitions by Panos Antsaklis and Kevin Passino, who characterized intelligent control as a discipline emulating human-like reasoning, learning, and autonomy to address high degrees of uncertainty, as outlined in their 1993 edited volume and the 1994 IEEE task force report. This theoretical foundation shifted control from rigid, model-based strategies to flexible, learning-based ones, assuming familiarity with basic feedback control while emphasizing the move toward intelligent adaptability.

Key Milestones

The field of intelligent control began to take shape in the 1980s with the practical application of fuzzy logic to control systems, building on Lotfi Zadeh's foundational fuzzy set theory introduced in 1965. Early implementations included fuzzy controllers for industrial processes, such as the 1987 deployment of a fuzzy control system for automatic train operation on the Sendai Subway in Japan, marking one of the first real-world uses of fuzzy methods for adaptive decision-making in dynamic environments. Concurrently, the late 1980s saw initial experiments with neural networks for control tasks, exemplified by works like Psaltis et al.'s 1988 demonstration of neural network-based control for nonlinear systems, which highlighted the potential for learning-based adjustments in uncertain conditions. The late 1980s and 1990s marked a period of institutional consolidation for intelligent control. The IEEE International Symposium on Intelligent Control (ISIC) was established in 1985, providing a key forum for advancing research in adaptive and autonomous systems, with annual events fostering collaboration among engineers and computer scientists. In 1985, the IEEE Control Systems Society formed the Technical Committee on Intelligent Control (TCIC), which has since coordinated efforts to integrate artificial intelligence techniques into control engineering. Influential publications, such as the 1993 edited volume Fuzzy Logic and Control: Software and Hardware Applications by Jamshidi, Vadiee, and Ross, synthesized emerging software and hardware implementations, emphasizing hybrid fuzzy-neural approaches for robust system design. During the 2000s and 2010s, intelligent control evolved through the integration of hybrid systems and machine learning, enabling more sophisticated handling of complex, uncertain environments. Hybrid intelligent systems, combining symbolic reasoning with subsymbolic methods like fuzzy logic and neural networks, gained prominence, as seen in the 2005 IEEE Transactions on Systems, Man, and Cybernetics special issue on hybrid control architectures.
Reinforcement learning's incorporation into control frameworks accelerated in the 2010s, with Theodorou et al.'s 2010 path-integral approach bridging stochastic optimal control and reinforcement learning for policy optimization in continuous spaces. Practical milestones included the DARPA Grand Challenges from 2004 to 2007, where autonomous vehicles like Stanford's Stanley demonstrated intelligent control through sensor fusion, path planning, and real-time adaptation, completing off-road and urban navigation tasks that propelled advancements in vehicle autonomy. In the 2020s, intelligent control has increasingly incorporated deep learning and considerations of AI ethics, particularly through safe reinforcement learning methodologies to ensure reliability in critical applications. Reinforcement learning has enhanced control in areas like robotics and power systems, as evidenced by Buchli's 2024 keynote overview of learning-based optimal control for sequential decision-making under uncertainty. Advancements in safe reinforcement learning, such as those reviewed in recent surveys, focus on constraint satisfaction and risk mitigation, enabling deployment in safety-critical domains like autonomous driving while addressing ethical imperatives for verifiable and equitable AI behaviors.

Core Principles

Adaptivity and Learning

Adaptivity in intelligent control refers to the capability of a control system to autonomously modify its parameters or structure in response to uncertainties, disturbances, or changes in the controlled process, ensuring sustained performance without manual reconfiguration. This adjustment process distinguishes adaptive systems from classical fixed-gain controllers, which assume known and constant dynamics, by incorporating mechanisms that estimate and compensate for model mismatches in real time. Adaptivity enables systems to operate robustly under varying conditions, such as parameter drifts or unmodeled dynamics, by continuously updating the controller based on observed errors or performance metrics. Adaptivity manifests in two primary types: parameter adaptation and structural adaptation. Parameter adaptation involves tuning fixed controller parameters to match an assumed plant model, often converging to optimal values as adaptation effects diminish over time for slowly varying or constant uncertainties. A seminal example is Model Reference Adaptive Control (MRAC), introduced by Whitaker et al. in 1958, where the controller adjusts gains to make the plant's output track that of a reference model, using rules like the MIT rule for gradient-based updates. In contrast, structural adaptation alters the controller's architecture itself, such as switching between control laws or reconfiguring feedback loops, to handle abrupt changes or nonlinearities that parameter tuning alone cannot address; this form requires ongoing adaptation and is suited for highly dynamic environments. Learning mechanisms underpin adaptivity by enabling the system to acquire knowledge from data or interactions, tailored to control contexts. Supervised learning identifies system models from labeled input-output pairs to refine adaptive laws, enhancing accuracy in parameter estimation.
Unsupervised learning detects patterns in unlabeled data, such as clustering operational regimes to trigger structural changes without explicit error signals. Reinforcement learning, particularly Q-learning, optimizes control policies in discrete state spaces by iteratively updating action-value functions based on rewards, as demonstrated in adaptive traffic signal control where agents learn to minimize delays through trial-and-error exploration. These mechanisms draw from broader machine learning paradigms but prioritize control-specific objectives like tracking and stability. Neural networks can approximate nonlinear learning functions within these frameworks, though detailed implementations are addressed elsewhere. Key concepts in adaptivity include the distinction between online and offline learning. Online learning updates the controller in real time using data from the operating plant, allowing immediate response to changes but risking instability during transients; it is essential for dynamic environments. Offline learning, conversely, trains models on pre-collected datasets before deployment, offering safer initial tuning but limited adaptability to unforeseen variations. Stability guarantees are ensured through Lyapunov-based adaptive laws, which construct a Lyapunov function whose time derivative is negative semi-definite, proving bounded errors and convergence. A representative Lyapunov candidate for MRAC is V(e, \tilde{\theta}) = e^T P e + \tilde{\theta}^T \Gamma^{-1} \tilde{\theta}, where e is the tracking error, P is the positive definite solution of the Lyapunov equation for the reference model, \tilde{\theta} = \theta - \theta^* is the parameter error, and \Gamma > 0 is the adaptation gain matrix; the update law \dot{\tilde{\theta}} = -\Gamma \phi(x) e^T P B (with regressor \phi and input matrix B) cancels the parameter-error terms so that \dot{V} \leq -e^T Q e for some Q > 0, guaranteeing asymptotic tracking and, under persistent excitation, parameter convergence.
In practice, adaptivity plays a crucial role in managing time-varying plants, whose parameters evolve due to wear, environmental factors, or operational shifts, rendering fixed controllers ineffective. Unlike classical approaches that fail under such variations, adaptive methods modify control laws to bound errors, as in robust MRAC schemes that incorporate dead-zones or projection operators to prevent parameter drift. This capability extends to nonlinear systems, ensuring global stability for plants with matched uncertainties and time-varying structures.
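The MIT-rule form of MRAC described in this section can be illustrated with a minimal simulation; the plant gain, reference model, and adaptation rate below are assumed values chosen for demonstration, not a validated design.

```python
# Minimal MRAC sketch using the MIT rule for a single adaptive
# feedforward gain. The plant gain k = 2 is "unknown" to the controller;
# all other numbers are illustrative assumptions.
dt, gamma = 0.001, 1.0
a, k = 1.0, 2.0          # plant: dy/dt = -a*y + k*u
km = 1.0                 # reference model: dym/dt = -a*ym + km*r
y = ym = 0.0
theta = 0.0              # adaptive gain; the ideal value is km/k = 0.5
r = 1.0                  # constant reference input

for _ in range(20000):   # 20 s of simulated time
    u = theta * r
    e = y - ym
    # MIT rule: d(theta)/dt = -gamma * e * (sensitivity), with the
    # sensitivity approximated by the reference-model output ym.
    theta += -gamma * e * ym * dt
    y += (-a * y + k * u) * dt    # Euler step of the plant
    ym += (-a * ym + km * r) * dt # Euler step of the reference model
print(round(theta, 2))            # adapts toward km/k = 0.5
```

The adapted gain converges to the value that makes the plant output match the reference model, without the controller ever knowing k explicitly.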

Autonomy and Decision-Making

Autonomy in intelligent control refers to the capability of systems to operate independently in uncertain and dynamic environments, making decisions without constant human intervention. This principle spans various levels of autonomy, ranging from reactive behaviors at the low level, which respond directly to immediate sensory inputs, to deliberative processes at higher levels that involve long-term planning and goal-oriented reasoning. A foundational approach to these levels is the subsumption architecture, proposed by Rodney Brooks, which structures control as layered behaviors where lower layers handle basic reactivity, such as obstacle avoidance, while higher layers subsume and integrate them for more complex tasks like navigation. This architecture enables progressive autonomy by allowing systems to function robustly even if higher layers are not fully developed, emphasizing emergent behavior over centralized deliberation. Decision-making in autonomous intelligent control often employs hierarchical structures that incorporate knowledge bases for storing domain-specific rules and facts, coupled with inference engines to reason about goals and select actions. In such systems, the knowledge base serves as a repository of symbolic representations, while the inference engine applies forward or backward chaining to derive decisions, enabling the system to evaluate options against objectives like efficiency or safety. For instance, in manufacturing processes, hierarchical controllers use these components to prioritize tasks, such as adjusting parameters based on inferred environmental changes. This setup supports goal selection by propagating inferences across layers, ensuring decisions align with overarching system objectives in real time. Key to autonomy are fault tolerance and self-recovery mechanisms, which allow systems to detect anomalies, isolate faults, and restore functionality without external aid, thereby maintaining operational continuity.
Intelligent fault-tolerant control integrates diagnostic modules that monitor system states and trigger reconfiguration, such as switching to redundant actuators in robotic arms. For state transitions in autonomous agents, tools like decision trees model branching choices based on sensor data to predict and execute paths, while Petri nets represent concurrent processes and firing rules for transitions, as seen in UAV mission planning where nets synchronize concurrent behaviors. These methods enhance reliability by formalizing recovery sequences, such as reverting to safe states upon failure detection. Unlike pure adaptation, which focuses on parameter tuning through data-driven methods like online learning, autonomy emphasizes symbolic reasoning and proactive planning to handle unforeseen scenarios. This distinction underscores autonomy's reliance on explicit knowledge representation for deliberative choices, rather than solely reactive adjustments. In Markov decision processes (MDPs), a common framework for modeling autonomous decision-making, action selection maximizes the utility function U(a|s), defined as the expected discounted reward for choosing action a in state s: U(a|s) = \mathbb{E} \left[ \sum_{t=0}^{\infty} \gamma^t r(s_t, a_t) \mid s_0 = s, a_0 = a \right], where \gamma is the discount factor and r is the reward function, guiding the policy toward optimal long-term outcomes. Learning can refine these utilities over time, but the core autonomy lies in the planning process itself.
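The utility maximization above can be illustrated with value iteration on a toy MDP; the states, actions, transitions, and rewards below are invented purely for the sketch.

```python
# Value iteration on a toy 2-state MDP, computing U(a|s) as the
# action-value Q(s, a) from the formula above. All numbers are invented.
import itertools

gamma = 0.9
states = [0, 1]
actions = ["stay", "move"]
# Deterministic transitions P[(s, a)] -> next state, rewards R[(s, a)].
P = {(0, "stay"): 0, (0, "move"): 1, (1, "stay"): 1, (1, "move"): 0}
R = {(0, "stay"): 0.0, (0, "move"): 1.0, (1, "stay"): 2.0, (1, "move"): 0.0}

V = {s: 0.0 for s in states}
for _ in range(200):  # iterate the Bellman backup to convergence
    V = {s: max(R[(s, a)] + gamma * V[P[(s, a)]] for a in actions)
         for s in states}

# Q(s, a) is exactly U(a|s): the expected discounted return of taking
# action a in state s and acting optimally thereafter.
Q = {(s, a): R[(s, a)] + gamma * V[P[(s, a)]]
     for s, a in itertools.product(states, actions)}
policy = {s: max(actions, key=lambda a: Q[(s, a)]) for s in states}
print(policy)  # the agent moves to state 1 and stays there
```

Here the optimal policy forgoes the immediate reward of staying in state 0 to reach state 1, whose repeated reward dominates the discounted sum.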

Methods and Techniques

Fuzzy Logic Control

Fuzzy logic control represents a key method in intelligent control systems, leveraging fuzzy set theory to handle uncertainty and imprecision inherent in real-world processes. Introduced by Lotfi A. Zadeh in 1965, fuzzy sets generalize classical sets by assigning membership degrees μ(x) ∈ [0,1] to elements, allowing the modeling of linguistic variables such as "approximately zero" or "very high." This framework enables controllers to process qualitative human knowledge rather than relying solely on quantitative models, making it particularly suitable for systems where exact dynamics are difficult to derive or vary unpredictably. The structure of a fuzzy logic controller typically comprises four main components: fuzzification, a rule base, an inference engine, and defuzzification. Fuzzification transforms crisp input values, such as the error and its derivative, into fuzzy sets using predefined membership functions. The rule base consists of linguistic IF-THEN rules derived from expert knowledge, for example, "IF error is positive large AND change in error is negative small THEN control output is positive medium." The inference engine evaluates these rules to produce fuzzy outputs, employing methods like the Mamdani approach, which clips or scales output fuzzy sets based on firing strengths, or the Takagi-Sugeno method, which uses linear functions in the rule consequents for smoother approximations. Finally, defuzzification converts the aggregated fuzzy output into a crisp control signal, often via the centroid method: u = \frac{\sum_i \mu_i z_i}{\sum_i \mu_i}, where μ_i is the aggregated membership for the i-th output element and z_i is the corresponding crisp value. The Mamdani method, pioneered in 1975 for linguistic synthesis, emphasizes interpretability through linguistic fuzzy outputs, while the Takagi-Sugeno model from 1985 facilitates analytical design by blending local linear models weighted by membership functions.
One primary advantage of fuzzy logic control lies in its ability to exploit qualitative knowledge for nonlinear plants without requiring precise mathematical models, thereby simplifying design for complex, ill-defined systems. This approach emulates human decision-making, providing robustness to parameter variations and disturbances that challenge classical linear controllers. A representative example is the stabilization of an inverted pendulum on a cart, a benchmark nonlinear system prone to instability; fuzzy controllers achieve balance by tuning the cart velocity based on deviations in pendulum angle and angular velocity, demonstrating effective performance even with model uncertainties. Variants of fuzzy logic control include adaptive fuzzy systems, which incorporate online rule-tuning mechanisms to enhance performance in dynamic environments. These systems adjust membership functions or rule weights in real time using gradient-based or stability-based algorithms, ensuring stability and convergence for time-varying nonlinear plants as established in early adaptive frameworks. Such adaptations maintain the interpretability of fuzzy rules while improving tracking accuracy over fixed-structure controllers.
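The fuzzification, rule evaluation, and centroid defuzzification steps described above can be sketched in a few lines; the triangular membership functions, the two-term rule base, and the zero-order Takagi-Sugeno consequents are all invented for illustration.

```python
# Sketch of one fuzzy-control step: fuzzify a crisp error, fire three
# illustrative rules, and defuzzify via the centroid formula
# u = sum(mu_i * z_i) / sum(mu_i). All shapes and consequents are invented.
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_control(error):
    # Fuzzification: degree to which the error is "negative"/"zero"/"positive".
    mu_neg = tri(error, -2.0, -1.0, 0.0)
    mu_zero = tri(error, -1.0, 0.0, 1.0)
    mu_pos = tri(error, 0.0, 1.0, 2.0)
    # Rule base (zero-order Takagi-Sugeno consequents for compactness):
    # IF error is negative THEN u = -1; IF zero THEN u = 0; IF positive THEN u = 1.
    rules = [(mu_neg, -1.0), (mu_zero, 0.0), (mu_pos, 1.0)]
    num = sum(mu * z for mu, z in rules)   # weighted consequents
    den = sum(mu for mu, _ in rules)       # total firing strength
    return num / den if den else 0.0       # centroid defuzzification

print(fuzzy_control(0.5))  # blends the "zero" and "positive" rules
```

For an error of 0.5, the "zero" and "positive" rules each fire at degree 0.5, and the centroid blends their consequents into an intermediate control output.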

Neural Network-Based Control

Neural network-based control leverages artificial neural networks (ANNs) to approximate plant dynamics and implement adaptive controllers, and it is particularly effective for systems where traditional linear models fail to capture complex behaviors. ANNs excel at modeling black-box systems by learning input-output mappings from data, enabling adaptive control in uncertain or time-varying environments. This approach draws on the connectionist paradigm, where networks process signals through interconnected nodes to achieve generalization beyond the training data. Key neural architectures in control include feedforward networks for static mappings, recurrent networks such as nonlinear autoregressive networks with exogenous inputs (NARX) for dynamic systems with feedback loops, and radial basis function (RBF) networks for localized approximations. Feedforward networks map inputs directly to outputs, making them suitable for steady-state control tasks. NARX architectures incorporate past outputs and inputs to predict future states, making them ideal for time-series control in sequential processes. RBF networks use Gaussian basis functions centered at data points to approximate functions locally, offering fast training and local generalization properties for real-time applications. These architectures are grounded in the universal approximation theorem, which states that a single hidden layer with a non-constant, bounded, and monotonically increasing continuous activation function can approximate any continuous function on a compact subset of \mathbb{R}^n to arbitrary accuracy, provided sufficiently many neurons are used. Control strategies employing these networks include direct inverse control, where the ANN is trained to invert the plant dynamics, generating control inputs that directly achieve desired outputs without an explicit forward model. In model reference adaptive control (MRAC) augmented with neural networks, an ANN serves as an identifier to estimate unknown parameters online, adjusting the controller to track a reference model's response.
Network weights are tuned via backpropagation, minimizing an error metric E, such as the mean squared error between actual and desired outputs. The weight update rule is \Delta w = -\eta \frac{\partial E}{\partial w}, where \eta > 0 is the learning rate, enabling gradient-based adaptation to parameter variations or disturbances. Training paradigms for these controllers often rely on supervised learning, where input-output pairs from the plant are used to identify the system model before control deployment. For instance, in plant identification, the network is trained offline on simulation data or online during operation to refine approximations, ensuring stability through Lyapunov-based guarantees in adaptive schemes. Neural network controllers have been applied to robotic manipulators, where multilayer perceptrons adapt to payload changes and friction for precise trajectory tracking, outperforming PID controllers in nonlinear regimes. Similarly, in aircraft autopilots, recurrent networks handle aerodynamic uncertainties for robust altitude and attitude control, as demonstrated in simulations under wind disturbances. These examples illustrate ANNs' utility in black-box scenarios, where physical models are unavailable or overly complex. Hybrids with fuzzy logic can enhance interpretability by combining neural learning with rule-based structures, though pure neural approaches dominate for data-rich control tasks.
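The gradient update \Delta w = -\eta \, \partial E / \partial w can be illustrated on the smallest possible case, a single linear neuron identifying an assumed plant gain; the plant y = 3x, the data, and the learning rate are all illustrative assumptions.

```python
# Sketch of the weight update Delta_w = -eta * dE/dw for a one-weight
# "network" identifying an assumed linear plant y = 3x. Illustrative only.
eta = 0.1
w = 0.0                                             # weight = gain estimate
data = [(x * 0.1, 3.0 * x * 0.1) for x in range(1, 11)]  # (input, plant output)

for _ in range(200):                                # training epochs
    for x, y_desired in data:
        y = w * x                                   # network output
        error = y - y_desired
        grad = error * x                            # dE/dw for E = 0.5 * error**2
        w += -eta * grad                            # gradient step
print(round(w, 3))                                  # converges to the true gain
```

With a full network, the same rule is applied to every weight, with backpropagation supplying each partial derivative.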

Probabilistic and Bayesian Approaches

Probabilistic and Bayesian approaches in intelligent control provide a mathematical framework for handling uncertainty, making decisions under incomplete information, and updating beliefs based on new evidence, which is essential for systems operating in noisy or partially observable environments. At the core of this framework is Bayes' theorem, which combines a prior distribution \pi(\theta) over model parameters \theta with the likelihood p(y|\theta) of observed data y to compute the posterior p(\theta|y) \propto \pi(\theta) p(y|\theta). This posterior enables parameter estimation in control systems, allowing controllers to adaptively refine models of dynamic processes, such as in system identification, where plant parameters are estimated amid sensor noise. In control applications, Bayesian methods extend classical filtering techniques, such as Bayesian variants of the Kalman filter, to perform state estimation in nonlinear or non-Gaussian systems, improving robustness in practice. For instance, particle filters, a sequential Monte Carlo implementation of Bayesian filtering, are used for tracking and prediction in stochastic environments, outperforming traditional Kalman filters in scenarios with multimodal uncertainties. Bayesian methods also facilitate reinforcement learning (RL), where policies are optimized by balancing exploration and exploitation through posterior sampling, leading to more sample-efficient learning in tasks like robotic manipulation. Additionally, partially observable Markov decision processes (POMDPs) model control problems with hidden states, using belief states derived from Bayesian updates to compute optimal policies via methods like point-based value iteration, which has been applied to robot navigation in uncertain terrains. A practical example is autonomous drone flight, where Bayesian sensor fusion integrates noisy measurements from GPS, IMU, and visual sensors to estimate position and velocity, enabling stable flight in windy conditions and significantly reducing estimation error compared to deterministic fusion methods.
As a variant, Gaussian processes (GPs) offer nonparametric Bayesian modeling of system dynamics, representing uncertainty as a distribution over functions, which is particularly useful for data-driven control in unknown environments, such as adaptive tuning of controllers. GPs enable uncertainty quantification in predictions, supporting safe exploration in learning-based control frameworks. Neural Bayesian networks, which combine these ideas with neural architectures for scalable inference, are explored in related neural methods but remain distinct in their explicit probabilistic foundations.
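The posterior update p(\theta|y) \propto \pi(\theta) p(y|\theta) can be illustrated with a scalar Kalman-style filter that estimates a constant state from noisy measurements; the true state, noise variance, prior, and seed are assumptions made for the sketch.

```python
# Sketch of recursive Bayesian estimation: a scalar Kalman-style filter
# sharpening a Gaussian belief over a constant hidden state from noisy
# sensor readings. All numbers (state, noise, prior) are illustrative.
import random

random.seed(0)
true_state = 5.0
meas_var = 1.0                 # measurement noise variance R
mean, var = 0.0, 100.0         # broad Gaussian prior over the state

for _ in range(50):
    z = true_state + random.gauss(0.0, meas_var ** 0.5)  # noisy sensor reading
    # Bayes update (Gaussian conjugacy): posterior combines prior and likelihood.
    k = var / (var + meas_var)         # Kalman gain: trust in the new measurement
    mean = mean + k * (z - mean)       # posterior mean shifts toward the data
    var = (1.0 - k) * var              # posterior variance shrinks monotonically
print(round(mean, 2), round(var, 3))
```

After 50 updates the posterior mean sits close to the hidden state and the variance has collapsed to a small value, quantifying the remaining uncertainty rather than discarding it.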

Evolutionary and Swarm Intelligence Methods

Evolutionary algorithms, inspired by natural selection and genetics, provide a population-based optimization framework for designing intelligent controllers by iteratively evolving candidate solutions to meet control objectives. These methods are particularly useful in complex, nonconvex problems where traditional gradient-based techniques may converge to local optima. In intelligent control, evolutionary algorithms optimize controller parameters or structures by evaluating fitness based on performance metrics, such as the integral of squared error (ISE), defined as \text{ISE} = \int_0^\infty e^2(t) \, dt, where e(t) is the tracking error. Genetic algorithms (GAs), a prominent class of evolutionary algorithms, operate through mechanisms of selection, crossover, and mutation to mimic biological evolution. In the selection phase, superior individuals from the population, representing potential controller configurations, are chosen probabilistically based on their fitness scores, favoring those that minimize errors or maximize stability margins. Crossover combines features from two selected parents to generate offspring, promoting diversity in the search space, while mutation introduces random variations to prevent premature convergence. Seminal work by Chipperfield and Fleming demonstrated GAs' efficacy in engineering applications, including multivariable control systems, by tuning parameters to achieve robust performance. Swarm intelligence methods, drawing on collective behaviors in nature such as bird flocking, offer another bio-inspired approach to controller optimization. Particle swarm optimization (PSO), introduced by Kennedy and Eberhart, initializes a swarm of particles in the parameter space, each representing a candidate controller solution, and updates their positions iteratively toward personal and global optima. The velocity update equation is given by
\mathbf{v}_i^{t+1} = w \mathbf{v}_i^t + c_1 r_1 (\mathbf{pbest}_i - \mathbf{x}_i^t) + c_2 r_2 (\mathbf{gbest} - \mathbf{x}_i^t),
where w is the inertia weight, c_1 and c_2 are cognitive and social coefficients, r_1 and r_2 are random scalars in [0,1], \mathbf{pbest}_i is the particle's best position, \mathbf{gbest} is the swarm's global best, and \mathbf{x}_i^t is the current position. PSO has been widely applied to tune proportional-integral-derivative (PID) controllers, achieving faster convergence than traditional methods in nonlinear systems.
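The velocity update above can be sketched on a toy one-dimensional cost; the swarm size, inertia weight, and coefficients below are illustrative choices, not recommendations for any real tuning problem.

```python
# Minimal PSO sketch implementing the velocity update
# v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)
# to minimize an invented quadratic cost f(x) = (x - 3)^2.
import random

random.seed(1)
f = lambda x: (x - 3.0) ** 2          # cost surface: minimum at x = 3
w, c1, c2 = 0.7, 1.5, 1.5             # inertia, cognitive, social weights
pos = [random.uniform(-10, 10) for _ in range(20)]
vel = [0.0] * 20
pbest = pos[:]                         # each particle's personal best
gbest = min(pos, key=f)                # swarm's global best

for _ in range(100):
    for i in range(20):
        r1, r2 = random.random(), random.random()
        vel[i] = (w * vel[i] + c1 * r1 * (pbest[i] - pos[i])
                  + c2 * r2 * (gbest - pos[i]))
        pos[i] += vel[i]
        if f(pos[i]) < f(pbest[i]):    # update personal best
            pbest[i] = pos[i]
            if f(pos[i]) < f(gbest):   # update global best
                gbest = pos[i]
print(round(gbest, 3))                 # swarm converges near the minimum
```

In a controller-tuning setting, x would instead be a vector of controller gains and f a simulated performance cost such as the ISE.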
In control applications, evolutionary and swarm methods excel at evolving fuzzy rules or neural network weights to enhance controller adaptability. For instance, GAs can optimize the parameters of fuzzy inference systems by evolving rule bases that improve decision-making in uncertain environments, as shown in early applications to fuzzy control design. Similarly, these algorithms adjust neural network weights in adaptive controllers, enabling online learning for dynamic processes without requiring derivative information. Multi-objective formulations extend this capability, balancing trade-offs such as performance versus robustness in controller design; for example, the non-dominated sorting genetic algorithm II (NSGA-II) generates Pareto-optimal solutions for robust control under uncertainty. A representative example is the use of GAs to tune PID controllers for chemical processes, such as batch reactors, where global optimization handles nonlinear dynamics and parameter interactions effectively. In one study, GA-optimized PID parameters reduced settling times and overshoot compared to Ziegler-Nichols tuning in a pH neutralization process, demonstrating improved tracking under varying operating conditions. These techniques underscore the role of evolutionary and swarm intelligence in creating scalable, robust intelligent controllers for real-world systems.
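The selection, crossover, and mutation mechanics described above can be sketched in a few lines. The fitness function here is a stand-in (squared distance to a hypothetical ideal gain vector) rather than a real closed-loop metric, and the operator choices (truncation selection, one-point crossover, Gaussian mutation) are one common configuration among many.

```python
import random

def fitness(params):
    """Stand-in objective: distance from a hypothetical 'ideal' gain
    vector. In practice this would be a closed-loop metric like ISE."""
    target = (2.0, 0.5, 0.1)                 # illustrative (kp, ki, kd)
    return sum((p - t) ** 2 for p, t in zip(params, target))

def genetic_tune(pop_size=30, gens=80, pmut=0.2, seed=7):
    rng = random.Random(seed)
    pop = [tuple(rng.uniform(0.0, 5.0) for _ in range(3))
           for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=fitness)
        elite = scored[: pop_size // 2]      # truncation selection
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, 3)        # one-point crossover
            child = list(a[:cut] + b[cut:])
            if rng.random() < pmut:          # Gaussian mutation
                i = rng.randrange(3)
                child[i] += rng.gauss(0.0, 0.3)
            children.append(tuple(child))
        pop = elite + children               # elitism preserves the best
    return min(pop, key=fitness)
```

Keeping the elite half unchanged each generation guarantees the best solution never degrades, while mutation supplies the local refinement that crossover alone cannot.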

Applications

Industrial and Process Control

Intelligent control techniques have been widely adopted in industrial and process control to manage complex, nonlinear systems where traditional methods fall short, particularly in manufacturing and chemical environments. In cement production, fuzzy logic control has proven effective for optimizing kiln operations, which are critical for clinker formation and account for a significant portion of a plant's energy use. By emulating human decision-making through linguistic rules, fuzzy controllers adjust fuel flow, air intake, and material feed rates in real time to maintain optimal temperature profiles despite disturbances such as varying raw material composition. A notable implementation at the Oregon Portland Cement Company's Durkee Plant used fuzzy logic to automate kiln control, achieving fuel efficiency savings of 3-4%. Similarly, an interval type-2 fuzzy controller applied to the calciner stage of cement production enhanced clinker quality and reduced energy consumption by stabilizing the process under uncertainty.

Neural network-based approaches further advance predictive maintenance in assembly lines, where equipment failures can halt production and incur high costs. These networks analyze sensor data from vibration, temperature, and acoustic measurements to forecast component degradation, enabling proactive interventions. In manufacturing settings, deep echo state networks have been employed to detect anomalies in production-line energy consumption, allowing early fault identification that minimizes unplanned stoppages. Such systems typically reduce downtime compared to reactive maintenance strategies, as demonstrated in comparative studies of deep learning models for industrial sensor data.

In chemical process control, intelligent enhancements to model predictive control (MPC) address multivariable interactions in distillation columns, where coupled variables such as reflux ratio, feed flow, and pressure must be optimized for product purity and energy efficiency. AI-augmented MPC uses neural networks to predict dynamic responses and refine control horizons, improving constraint handling in nonlinear regimes.
For instance, a dynamic neural network integrated with MPC for an industrial distillation column achieved better setpoint tracking and disturbance rejection, leading to more efficient separation. Another application, in LPG recovery, showed that AI-enhanced MPC reduced output fluctuations and increased stability. The benefits of intelligent control in these domains include reduced operational costs, substantial energy savings, and enhanced product quality. A case in point is the steel rolling mill, where adaptive speed regulation via neural networks synchronizes multiple stands to handle varying strip thicknesses and tensions, preventing tension- and thickness-related defects. This approach, implemented in a multi-stand rolling setup, improved speed accuracy and throughput by dynamically compensating for load variations. In real-world plants, intelligent controllers have also demonstrated roughly 30% reductions in settling time and overshoot for multivariable processes, ensuring faster stabilization and minimal deviation from setpoints. In renewable energy systems, intelligent control optimizes operations such as wind turbine pitch and yaw adjustment, using fuzzy logic and neural networks to maximize power output under varying wind conditions, with reported gains in efficiency and fault tolerance.
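The kind of linguistic-rule controller used in the kiln applications above can be sketched as a minimal zero-order Sugeno system. The membership ranges, rule outputs, and units below are invented for illustration and are not taken from any plant.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_fuel_adjust(temp_error):
    """Hypothetical zero-order Sugeno controller for kiln temperature:
    maps temperature error (setpoint - measured, in degrees C) to a
    fuel-rate change. All numbers are illustrative."""
    # Fuzzify: degrees of membership in three linguistic error sets
    neg  = tri(temp_error, -100.0, -50.0, 0.0)   # "too hot"   -> cut fuel
    zero = tri(temp_error,  -50.0,   0.0, 50.0)  # "on target" -> hold
    pos  = tri(temp_error,    0.0,  50.0, 100.0) # "too cold"  -> add fuel
    # Rule consequents (fuel-rate change, kg/h) with weighted-average
    # defuzzification over the rule firing strengths
    weights = [neg, zero, pos]
    outputs = [-10.0, 0.0, 10.0]
    total = sum(weights)
    if total == 0.0:
        return 0.0
    return sum(w * o for w, o in zip(weights, outputs)) / total
```

Because adjacent membership functions overlap, the controller interpolates smoothly between rules: an error of 25 degrees fires both "on target" and "too cold" at half strength, yielding a moderate fuel increase rather than a bang-bang jump.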

Robotics and Autonomous Systems

Intelligent control plays a pivotal role in robotics and autonomous systems, enabling navigation, adaptation, and interaction in unstructured environments where traditional rule-based methods fall short. By integrating adaptive learning and probabilistic reasoning, these systems process sensor data in real time to make decisions amid uncertainty, such as varying terrain or dynamic obstacles. For instance, reinforcement learning (RL) has been instrumental in developing locomotion controllers for quadruped robots, allowing them to achieve high-speed, agile movements on natural surfaces like grass or gravel. In one implementation, an end-to-end RL policy trained in simulation was transferred zero-shot to the MIT Mini Cheetah, enabling it to sprint at speeds up to 3.9 m/s on flat ground and maintain an average of 3.4 m/s on uneven terrain, demonstrating robust performance through adaptive velocity curricula and online system identification.

Bayesian filtering techniques similarly underpin simultaneous localization and mapping (SLAM) for mobile robots, providing probabilistic estimates of position and environment to handle localization errors in unknown spaces. The extended Kalman filter formulation (EKF-SLAM) assumes Gaussian noise to recursively update the robot state and map features, reducing landmark variance monotonically and enabling real-time indoor navigation without divergence, as validated in large-scale experiments. Particle filter-based methods like FastSLAM further enhance this by sampling trajectories and using EKFs for map updates, achieving linear computational complexity and superior handling of non-Gaussian uncertainties in outdoor datasets such as Victoria Park. These approaches allow mobile robots to build accurate maps while localizing with correlated observations, which is crucial for autonomous exploration.

In autonomous vehicles, neural network (NN)-based path planning facilitates safe trajectory generation by directly mapping sensor inputs to planning outputs, bypassing explicit environment modeling.
A fully convolutional NN (FCNN) approach, for example, processes grid-based occupancy maps to output probability distributions for lowest-cost and shortest paths, achieving a 95.1% success rate for cost-optimal planning and processing speeds of over 3000 steps per second on maps up to 80x80 cells. This end-to-end method incorporates traversability costs, yielding paths with length ratios near 1.0 relative to optima, and supports flexible map shapes for real-world deployment in self-driving cars.

Swarm robotics extends this to multi-robot coordination, where decentralized algorithms inspired by natural swarms enable collective behaviors such as exploration and perimeter defense without centralized control. Using local communication via mesh networks, such systems scale to dozens of robots, maintaining robustness to agent failures and adapting to dynamic tasks through modular software frameworks like the "marabunta" library.

Case studies illustrate the impact of hybrid intelligent controls combining these techniques. The 2005 DARPA Grand Challenge winner, Stanley, integrated probabilistic perception with machine-learned terrain classifiers for obstacle detection, reducing false positives to 0.002%, and performed real-time planning at 10 Hz using unscented Kalman filtering for state estimation, allowing it to complete the 132-mile course at average speeds exceeding 10 mph. The 2007 Urban Challenge built on this with similar hybrid systems in vehicles such as Carnegie Mellon University's Boss, which used probabilistic reasoning for urban navigation and obstacle avoidance. In modern UAV swarms, intelligent control enables coordinated missions in uncertain airspace: in search-and-rescue operations, AI-driven swarms survey vast areas with real-time data fusion, while in precision agriculture, swarm coordination optimizes crop monitoring by dynamically allocating tasks among drones. These examples highlight decentralized decision-making for disaster response, where collective perception adapts to spreading fires or shifting weather patterns.
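The recursive predict-update cycle behind EKF-SLAM, discussed above, can be illustrated in one dimension. This toy filter estimates a single static landmark position from noisy range readings; real EKF-SLAM maintains a joint robot-and-map state with linearized (Jacobian) motion and observation models, but the same gain-weighted fusion and monotone variance reduction apply.

```python
def kalman_1d(measurements, x0=0.0, p0=1.0, q=0.01, r=0.25):
    """Minimal 1-D Kalman filter for a static landmark position.
    Predict inflates the variance by process noise q; update fuses
    each measurement (variance r) via the Kalman gain. Returns the
    (estimate, variance) history."""
    x, p = x0, p0
    history = []
    for z in measurements:
        p = p + q                    # predict: uncertainty grows
        k = p / (p + r)              # Kalman gain in [0, 1]
        x = x + k * (z - x)          # update: move toward measurement
        p = (1.0 - k) * p            # update: uncertainty shrinks
        history.append((x, p))
    return history
```

Repeated observations drive the estimate toward the true landmark position while the posterior variance settles at a small steady-state value, the one-dimensional analogue of the monotone landmark-variance reduction noted for EKF-SLAM.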
In intelligent transportation systems, intelligent control supports traffic management through adaptive signal timing and vehicle-to-infrastructure communication, using real-time data to reduce congestion and improve flow in urban settings.

Addressing real-time decision-making in sensor-rich, uncertain terrain remains a core challenge that intelligent control mitigates through edge computing and low-latency inference. By deploying lightweight models on local hardware, such as neural processing units achieving 5 TOPS/W efficiency, robots can respond in under 10 ms to hazards like sudden obstacles on rugged ground, as seen in autonomous drones identifying heat signatures during search-and-rescue missions in hilly terrain. This approach ensures reliability in remote areas without cloud dependency, reducing model sizes by up to 75% via quantization while preserving accuracy for collision avoidance at high speeds. Such capabilities enable proactive adaptation, underscoring intelligent control's role in scaling robotic systems to complex, real-world scenarios.
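The size reduction cited above comes from storing each weight in 8 bits instead of 32 (a 75% reduction before any pruning). A minimal symmetric post-training quantization sketch, not tied to any specific framework's API, looks like this:

```python
def quantize_int8(weights):
    """Symmetric 8-bit post-training quantization sketch: map floats
    to int8 codes plus one shared scale factor (illustrative only)."""
    peak = max(abs(w) for w in weights)
    scale = peak / 127.0 if peak > 0 else 1.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 codes."""
    return [qi * scale for qi in q]
```

The round-trip error of each weight is bounded by the scale (one quantization step), which is why accuracy is largely preserved when the weight distribution has no extreme outliers.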

Challenges and Future Directions

Current Limitations

Intelligent control systems, particularly those leveraging deep learning models, face significant computational challenges in meeting real-time requirements for control loops. Deep neural networks often demand substantial processing power for inference and optimization, which can exceed the sampling intervals in applications like model predictive control (MPC), where optimization problems must be solved within milliseconds to ensure stability. This issue is exacerbated in edge environments, where latency from data transmission and limited hardware resources hinders deployment on resource-constrained devices, such as embedded controllers in industrial automation.

Reliability and explainability remain critical barriers, primarily due to the black-box nature of neural network-based controllers, which obscures decision-making processes and erodes trust in high-stakes operations. The lack of interpretability complicates verification and validation, as the millions of parameters in deep models make it difficult to trace errors or biases, potentially leading to unexpected failures. In safety-critical domains such as aviation, certification is particularly arduous; existing standards struggle to accommodate the non-deterministic behavior of learning-based systems, requiring exhaustive validation against adversarial inputs and domain shifts that current frameworks inadequately address.

Additional limitations include heavy data dependence in learning-based controls, where performance hinges on high-quality, diverse datasets for training, and risks of overfitting that degrade generalization to unseen scenarios. Overfitting arises from complex models fitting noise in limited data, necessitating techniques such as ensemble methods to mitigate variance, yet these increase computational overhead. Scalability poses further issues for large systems, such as multi-agent networks, where the exponential growth in interactions renders centralized control intractable and distributed algorithms inefficient without approximations such as mean-field methods.
These challenges have manifested in practice, notably in early autonomous vehicle trials where systems failed on unmodeled edge cases, such as unexpected environmental interactions or rare traffic patterns not captured in training data, leading to collisions or navigation errors. For instance, vision-based models in self-driving cars have performed reliably in nominal conditions but faltered in atypical scenarios such as reflective surfaces or unusual lighting, highlighting the gap between simulated and real-world robustness.

Emerging Trends

One prominent emerging trend in intelligent control is the integration of quantum computing to enhance reinforcement learning (RL) frameworks, enabling faster adaptation in complex, high-dimensional systems. Quantum-enhanced control leverages quantum neural networks to reduce action spaces logarithmically, achieving up to four times fewer parameters than classical models while accelerating training through techniques like the parameter shift rule. This approach has shown promise in applications such as multi-robot coordination in smart factories and cube-satellite attitude control, where sample efficiency and scalability are critical. Similarly, neuromorphic computing draws from neuroscience to implement brain-like hardware, such as memristive arrays, that mitigates the von Neumann bottleneck and supports lifelong learning in control systems. By incorporating spiking dynamics and sparse communication inspired by biological neural networks, these systems enable adaptive, energy-efficient controllers for embodied agents, addressing challenges like catastrophic forgetting in dynamic environments.

Complementing these advances, safe reinforcement learning (safe RL) with formal guarantees ensures reliable operation by quantifying violation risks and providing provable safety bounds. Recent methods transform formal specifications into learned policies, verifying non-deterministic behaviors in dynamic controllers to prevent unsafe actions during deployment.
For instance, Lyapunov-based constraints have been applied to guarantee stability in complex dynamical systems, bridging learning with rigorous proofs for applications in autonomous systems. Another key trend is the adoption of explainable AI (XAI) to develop interpretable controllers, incorporating neuroscience-inspired models such as grid cells and place cells to elucidate decision-making in robotic navigation. This cross-disciplinary approach enhances transparency, allowing observable behaviors to explain internal states and improve trust in AI-driven control. Edge computing further supports decentralized systems by processing data locally on resource-constrained devices, reducing latency and bandwidth needs while enabling federated updates for privacy-preserving control in distributed networks. Bio-inspired controls are also gaining traction for their adaptability in robotics, emulating mechanisms such as central pattern generators to enable robust locomotion and sensory feedback.

Post-2020 developments have advanced federated learning for distributed control, where edge devices collaboratively train models without sharing raw data, improving resilience and optimal power flow in smart grids. This paradigm has been applied across generation, transmission, and consumption stages, mitigating vulnerabilities such as false data injection attacks while preserving privacy. In climate modeling, AI integrations improve predictive accuracy by fusing multi-source data for extreme event forecasting, such as floods and heatwaves, with models outperforming traditional simulations in efficiency and resolution.

Looking ahead, hybrid human-AI control loops promise enhanced resilience through interdependent performance dynamics, where human and machine factors enable seamless collaboration. Empirical analyses reveal non-linear interrelations among 24 performance elements, emphasizing human oversight for accountability in AI-augmented workflows. This evolution supports scalable, ethical systems in domains like wearable robotics and urban mobility, fostering resilient, human-centered intelligence.
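The Lyapunov-based constraints mentioned above can be applied at run time as a simple safety filter: a learned action is accepted only if it decreases a Lyapunov function V along a one-step model of the dynamics, otherwise a known-safe fallback is used. The scalar dynamics, quadratic V, and fallback action below are illustrative assumptions, not a complete verified-safe RL method.

```python
def lyapunov_filter(x, u_learned, u_safe, step, V):
    """Accept the learned action only if it decreases the Lyapunov
    function V under the one-step dynamics model `step`; otherwise
    fall back to a known-safe action."""
    if V(step(x, u_learned)) < V(x):
        return u_learned
    return u_safe

def step(x, u, a=0.5, dt=0.1):
    """Hypothetical unstable scalar plant, Euler-discretized:
    x' = x + dt * (a * x + u)."""
    return x + dt * (a * x + u)

V = lambda x: x * x                  # quadratic Lyapunov candidate

x = 1.0
# A destabilizing learned action (+2.0) is rejected in favor of the
# safe fallback (-2.0), because it would increase V.
u = lyapunov_filter(x, u_learned=2.0, u_safe=-2.0, step=step, V=V)
```

Filters of this form give a per-step certificate (V never increases along the executed trajectory) while still letting the learned policy act whenever it is provably safe to do so.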

References

  1. [1]
    [PDF] DEFINING INTELLIGENT CONTROL
    Research areas relevant to intelligent control, in addition to conventional control include areas such as planning, learning, search algorithms, hybrid systems, ...
  2. [2]
    Introduction: Overview of Intelligent Controls - SpringerLink
    Actually, intelligent control is an enhancement of traditional control to include the ability to sense and reason about the environment with incomplete and ...
  3. [3]
  4. [4]
  5. [5]
    (PDF) Intelligent Control Systems - ResearchGate
    Aug 6, 2025 · ways to achieve the control goal are called as "intelligent control systems". ... over time and are fuzzy and sometimes contradictory. ... in the ...
  6. [6]
    Advances in Intelligent Control and Engineering Applications - MDPI
    This special issue covers intelligent control in areas like service, robotics, renewable energy, and transport, with ten papers presenting new methodologies ...
  7. [7]
    A comprehensive review of recent advances in intelligent controller ...
    Oct 28, 2025 · This study reports advancements in intelligent control methods and smart irrigation systems using a hybrid review methodology that combines ...
  8. [8]
    None
    ### Summary of Intelligent Control from the Document
  9. [9]
    Complex system and intelligent control: theories and applications
    Intelligent control is a kind of interdisciplinary new technology, which integrates the knowledges of control theory, computer science, artificial intelligence, ...
  10. [10]
    Classical Control System - an overview | ScienceDirect Topics
    Classical control theory suffers from some limitations due to the assumptions made for the control system such as linearity, time invariance, etc. These ...
  11. [11]
    LQR based improved discrete PID controller design via optimum ...
    Mar 15, 2013 · Classical optimal control theory has evolved over decades to formulate the well known Linear Quadratic Regulators which minimizes the excursion ...
  12. [12]
    [PDF] A Comparative Study Between a Classical and Optimal Controller ...
    LQR focuses on non-linear models rather than the classical linear equation approach of PID. The main drawback of PID controllers is that every test on the ...
  13. [13]
    Lqr-based PID controller with variable load tuned with data-driven ...
    Dec 8, 2023 · The LQR provides an efficient way to tune the parameters of a PID controller. The LQR has the disadvantage of requiring accurate models of the ...
  14. [14]
  15. [15]
    [PDF] Data-Driven Control: Overview and Perspectives - NSF PAR
    Data-driven techniques such as machine learning algorithms can provide complementary tools and insights to classical model-based control by enhancing the ...
  16. [16]
    The data-driven approach to classical control theory - ScienceDirect
    A data-driven approach to control design has been developing, since the early 1990's, upon the concepts and the methods of classical control theory.
  17. [17]
    Intelligent Model‐Free Data‐Driven Robust Control Based on ...
    Oct 22, 2025 · In this paper, a new intelligent discrete fractional model-free backstepping sliding mode controller is proposed to enhance the performance and ...
  18. [18]
    Intelligence-based hybrid control for power plant boiler - IEEE Xplore
    A hybrid classical/fuzzy control methodology is presented to integrate low-level machine control and high-level supervision for the steam temperature and ...Missing: approaches | Show results with:approaches
  19. [19]
    Hybrid Intelligent MPC In Industry | IEEE Conference Publication
    The paper deals with current research challenges in modeling and constrained predictive control as applied to a nonlinear process used in chemical industry.
  20. [20]
    Fuzzy logic | Mathematics, AI & Decision Making - Britannica
    Sep 16, 2025 · In 1965 Lotfi Zadeh, an engineering professor at the University of California at Berkeley, proposed a mathematical definition of those classes ...Missing: milestones | Show results with:milestones
  21. [21]
    Fuzzy control - Scholarpedia
    Oct 21, 2011 · In 1974, the first successful application of fuzzy logic to the control of a laboratory-scale process was reported (Mamdani and Assilian 1975).
  22. [22]
    History | IEEE Control Systems Society
    Past Symposia on Intelligent Control. The Technical Committee on Intelligent Control ... 1990 IEEE International Symposium on Intelligent Control GC: A ...Missing: formation | Show results with:formation
  23. [23]
    Fuzzy Logic and Control: Software and Hardware Applications, Vol ...
    Fuzzy Logic and Control: Software and Hardware Applications, Vol. 2. By Mohammad Jamshidi, Nader Vadiee, Timothy Ross; Published Jun 7, 1993 by Pearson.
  24. [24]
    [PDF] A Generalized Path Integral Control Approach to Reinforcement ...
    Equipped with the theoretical framework of stochastic optimal control with path integrals, we can now turn to its application to reinforcement learning with ...
  25. [25]
    The DARPA Grand Challenge: Ten Years Later
    Mar 13, 2014 · The DARPA Grand Challenge, a first-of-its-kind race to foster the development of self-driving ground vehicles.Missing: intelligent control
  26. [26]
    Jonas Buchli - The State of Optimal and Learning Control In The 2020s
    Nov 7, 2024 · In this thought-provoking keynote from the L4DC 2024 event, Jonas Buchli explores the rapid advancements in solving complex sequential ...
  27. [27]
    [PDF] A Historical Perspective of Adaptive Control and Learning - arXiv
    Feb 22, 2022 · Abstract. This article provides a historical perspective of the field of adaptive control over the past seven decades and its intersection ...
  28. [28]
    (PDF) Introduction to Adaptive Control - ResearchGate
    The aim of this introductory chapter is to emphasize the basic concepts pertinent to adaptive control and to present the significant adaptive control schemes.
  29. [29]
    Adaptive Control - an overview | ScienceDirect Topics
    An adaptive control system can be defined as a feedback control system intelligent enough to adjust its characteristics in a changing environment.
  30. [30]
    Adaptive Control Systems - an overview | ScienceDirect Topics
    An adaptive control system is one in which the controller parameters are adjusted automatically to compensate for changing process conditions.
  31. [31]
    Supervised Learning in Model Reference Adaptive Sliding Mode ...
    May 28, 2024 · This paper investigates the applicability of the Brandt-Lin (BL) learning algorithm, mathematically equivalent to the back-propagation algorithm, in adaptive ...
  32. [32]
    Unsupervised Learning of Visual 3D Keypoints for Control - arXiv
    Jun 14, 2021 · In this work, we propose a framework to learn such a 3D geometric structure directly from images in an end-to-end unsupervised manner.
  33. [33]
  34. [34]
    [PDF] Reinforcement learning is direct adaptive optimal control
    We view reinforcement learning methods as a computationally simple, direct approach to the adaptive optimal control of nonlinear systems. For concreteness, we ...
  35. [35]
    A Survey of Offline and Online Learning-Based Algorithms for ... - arXiv
    Feb 6, 2024 · In offline learning, the system may be trained either using collected and/or provided data (supervised learning), or, alternatively, by using ...
  36. [36]
  37. [37]
    Model reference adaptive control for nonlinear time‐varying hybrid ...
    May 25, 2023 · This paper presents the first model reference adaptive control system for nonlinear, time-varying, hybrid dynamical plants affected by matched and parametric ...
  38. [38]
    [PDF] A Robust Layered Control System for a Mobile Robot
    We call this architecture a subsumption architecture. In such a scheme we have a working control system for the robot very early in the piece as soon as we ...
  39. [39]
    [PDF] How to Build Complete Creatures Rather than Isolated Cognitive ...
    The two key aspects of the subsumption architecture are that (a) it imposes a layering methodology in building intelligent control programs, and that (b) within ...
  40. [40]
    Hierarchical, Knowledge-Based Control in Turning
    An intelligent supervisor coordinates these subsystems, combining a frame based knowledge base with a real-time inference engine. The intelligent supervisor ...
  41. [41]
    (PDF) Knowledge base learning control system-part 2 - ResearchGate
    The controller is composed of a rule base, a fact base and an inference engine. ... hierarchical levels in machine intelligence, with knowledge representing the ...
  42. [42]
    Review Development of Intelligent Fault-Tolerant Control Systems ...
    Mar 15, 2024 · A fault-tolerant control system can recover from the loss of a single component without compromising the system's ability to operate or remain ...
  43. [43]
    Implementing Autonomous Driving Behaviors Using a Message ...
    This executive module uses hierarchical interpreted binary Petri nets (PNs) to define the behavior expected from the car in different scenarios according to the ...
  44. [44]
    An Intelligent Algorithm for Decision Making System and Control of ...
    Feb 19, 2021 · Each transition of the fuzzy Petri net is defined as a rule that provides a change in a certain state [16]. Formally, the fuzzy Petri net is a ...
  45. [45]
    [PDF] Relevance-Weighted Action Selection in MDP's
    We present comparative experiments in the maze, cart control, and pendulum swing-up domains that substantiate the benefits of relevance-weighted action ...
  46. [46]
    Fuzzy sets - ScienceDirect
    View PDF; Download full issue. Search ScienceDirect. Elsevier · Information and Control · Volume 8, Issue 3, June 1965, Pages 338-353. Information and Cont…
  47. [47]
    An experiment in linguistic synthesis with a fuzzy logic controller
    This paper describes an experiment on the “linguistic” synthesis of a controller for a model industrial plant (a steam engine)
  48. [48]
    Fuzzy identification of systems and its applications to modeling and ...
    This paper presents a tool to build a fuzzy model using fuzzy implications and reasoning, and shows system identification using input-output data.Missing: URL | Show results with:URL
  49. [49]
    [PDF] Employment of fuzzy logic in the control of the inverted pendulum
    This paper presents the systematic design of a fuzzy logic controller for a car- pendulum mechanical system, well known in the literature as the inverted.
  50. [50]
    Fuzzy Logic Controllers. Methodology, Advantages and Drawbacks
    Feb 13, 2016 · The advantages of fuzzy logic controllers in structural control include flexibility, ease of implementation, robustness, and interpretability.
  51. [51]
    Neural networks for control systems—A survey - ScienceDirect
    This paper focuses on the promise of artificial neural networks in the realm of modelling, identification and control of nonlinear systems.
  52. [52]
  53. [53]
    Radial Basis Function (RBF) Neural Network Control for Mechanical ...
    This book introduces design methods and MATLAB simulation of stable adaptive RBF neural control strategies for mechanical systems, including robot manipulators ...
  54. [54]
  55. [55]
  56. [56]
    [PDF] AN INTRODUCTION TO THE USE OF NEURAL NETWORKS IN ...
    The purpose of this paper is to provide a quick overview of neural networks and to explain how they can be used in control systems.
  57. [57]
    Robot manipulator control using neural networks: A survey
    Apr 12, 2018 · In this paper, we make a relatively comprehensive review of research progress on controlling these robot manipulators by means of neural networks.Missing: controllers | Show results with:controllers
  58. [58]
  59. [59]
    (PDF) Genetic Algorithms In Control Systems Engineering
    Mar 1, 2017 · Fleming and Chipperfield [3] extensively studied the applicability of genetic algorithms in engineering, specially in job shop scheduling, ...
  60. [60]
    Genetic algorithms in control systems engineering: a brief introduction
    Abstract: Genetic algorithms (GA) are adaptive search techniques, based on the principles of natural genetics and natural selection, which, in control ...Missing: seminal papers
  61. [61]
    Genetic algorithms in control systems engineering - ePrints Soton
    Feb 23, 2023 · Chipperfield, A. and Fleming, P. (1995) Genetic algorithms in control systems engineering. Control and computers, 23 (3), 88-94.
  62. [62]
    Tuning of PID Controller Using Particle Swarm Optimization (PSO)
    Aug 6, 2025 · The aim of this research is to design a PID Controller using PSO algorithm. The model of a DC motor is used as a plant in this paper.
  63. [63]
    Evolutionary Algorithms for Fuzzy Control System Design
    Aug 5, 2025 · This paper provides an overview on evolutionary learning methods for the automated design and optimization of fuzzy logic controllers.
  64. [64]
    Evolutionary Computation and Its Applications in Neural and Fuzzy ...
    Oct 13, 2011 · Evolutionary algorithms are a major approach to adaptation and optimization. In this paper, we first introduce evolutionary algorithms with ...
  65. [65]
    MRCD: a genetic algorithm for multiobjective robust control design
    A genetic algorithm (GA) for the class of multiobjective optimization problems that appears in the design of robust controllers is presented in this paper.
  66. [66]
    [PDF] Genetic Algorithm Based PID Optimization in Batch Process Control
    This paper proposes a new genetic algorithm (GA) optimizer ... First of all, PID parameters are manually tuned using all the nominal process parameters and Fig.Missing: example | Show results with:example
  67. [67]
    Application of a GA-Optimized NNARX controller to nonlinear ...
    In this work, a neural network-based nonlinear controller was tested on two nonlinear chemical and biochemical processes. The controller was tuned using genetic ...
  68. [68]
    Automatic Kiln Control at Oregon Portland Cement Company's ...
    ... savings of about three to four percent in fuel efficiency is achieved. In addition, indications are that brick savings are substantial, as high as 50 percent.Missing: energy | Show results with:energy
  69. [69]
    DeepESN Neural Networks for Industrial Predictive Maintenance ...
    Sep 26, 2024 · This paper proposes a deep echo state network (DeepESN)-based method for anomaly detection by analyzing energy consumption data sets from production lines.
  70. [70]
    Comparison of deep learning models for predictive maintenance in ...
    Jul 2, 2025 · This paper presents a comprehensive comparison of deep learning models for predictive maintenance (PdM) in industrial manufacturing systems using sensor data.Missing: assembly | Show results with:assembly
  71. [71]
    Development of model predictive control system using an artificial ...
    Dec 20, 2020 · A new methodology was developed to control a distillation column more efficiently. A Dynamic neural network that can predict future response was modeled.
  72. [72]
    AI enhanced model predictive control for optimizing LPG recovery ...
    Aug 10, 2025 · Operation of distillation columns using model predictive control based on dynamic mode decomposition method. Ind. Eng. Chem. Res. 62(50) ...
  73. [73]
    Speed Control of Steel Rolling Mill using Neural Network
    May 26, 2024 · In this paper a fully neural network-based structure have been proposed to control speeds of rolling stands of a steel rolling mill.
  74. [74]
    intelligent process control by industrial programmable controllers
    Dec 15, 2020 · ... control with FLC is estimated on the average with. 30% decrease of settling time and overshoot. The system. with the incremental SI FC ...
  75. [75]
    [PDF] Rapid Locomotion via Reinforcement Learning - Robotics
    This work has shown that a neural network controller trained fully end-to-end in simulation can push a small quadruped to the limits of its agility, achieving ...
  76. [76]
    [PDF] Simultaneous Localisation and Mapping (SLAM) - People @EECS
    This paper showed that as a mobile robot moves through an unknown environment taking relative observa- tions of landmarks, the estimates of these landmarks are.<|separator|>
  77. [77]
    End-to-End One-Shot Path-Planning Algorithm for an Autonomous ...
    This work introduces an end-to-end path-planning algorithm based on a fully convolutional neural network (FCNN) for grid maps with the concept of the ...
  78. [78]
    Swarm-Enabling Technology for Multi-Robot Systems - Frontiers
    A swarm robotics system consists of autonomous robots with local sensing and communication capabilities, lacking centralized control or access to global ...Missing: papers | Show results with:papers
  79. [79]
    [PDF] Stanley: The robot that won the DARPA Grand Challenge
    Oct 8, 2005 · This article describes the robot Stanley, which won the 2005 DARPA Grand Challenge. Stanley was developed for high-speed desert driving ...
  80. [80]
    UAV swarms: research, challenges, and future directions
    Jan 28, 2025 · This paper provides a comprehensive exploration of UAV swarm infrastructure, recent research advancements, and diverse applications.
  81. [81]
    Optimizing Edge AI for Effective Real-time Decision Making in ...
    Mar 14, 2025 · Edge AI empowers robots to react in milliseconds, enabling life-saving actions in critical scenarios like autonomous vehicle collision avoidance and rapid ...
  82. [82]
    A Survey on Large-Population Systems and Scalable Multi-Agent ...
    Sep 8, 2022 · However, the key issue of scalability complicates the design of control and reinforcement learning algorithms particularly in systems with large ...
  83. [83]
    Trends in quantum reinforcement learning: State‐of‐the‐arts and the ...
    Oct 3, 2024 · This paper presents the basic quantum reinforcement learning theory and its applications to various engineering problems.
  84. [84]
  85. [85]
    [PDF] Verified Safe Reinforcement Learning for Neural Network Dynamic ...
    However, training a controller that can be formally verified to be safe remains a major challenge. We introduce a novel approach for learning verified safe ...
  86. [86]
  87. [87]
    Introducing edge intelligence to smart meters via federated split ...
    Oct 19, 2024 · The authors propose an end-edge-cloud federated split learning framework to introduce edge intelligence, reducing memory, training time, and communication ...
  88. [88]
    Artificial Intelligence-Driven and Bio-Inspired Control Strategies for ...
    Jul 28, 2025 · The convergence of AI, bio-inspired systems, and quantum computing is poised to enable sustainable, autonomous, and human-centric robotics, yet ...
  89. [89]
    Federated Learning for Smart Grid: A Survey on Applications and Potential Vulnerabilities
    Summary of federated learning applications in smart grids for distributed control, covering post-2020 advances.
  90. [90]
    Advancements and challenges of artificial intelligence in climate ...
    May 19, 2025 · Artificial intelligence (AI) has transformed climate modeling by improving predictive accuracy, processing efficiency, and data integration. The ...
  91. [91]
    Uncovering the dynamics of human-AI hybrid performance
    Human-AI collaboration is an increasingly important area of research as AI systems are integrated into everyday workflows and moving beyond mere automation ...