
Regular grammar

In theoretical computer science and formal language theory, a regular grammar is a type of formal grammar that generates regular languages, the simplest class in the Chomsky hierarchy of formal grammars. These languages consist of all strings that can be recognized by finite automata, and regular grammars are characterized by production rules that are either right-linear or left-linear, ensuring they produce only patterns describable by finite state machines. Formally, a regular grammar G is defined as a four-tuple G = (N, \Sigma, P, S), where N is a finite nonempty set of nonterminal symbols, \Sigma is a finite set of terminal symbols (the alphabet), P is a finite set of production rules of the form A \to aB, A \to a, or A \to \epsilon (right-linear) or A \to Ba, A \to a, or A \to \epsilon (left-linear), where A, B \in N, a \in \Sigma, and \epsilon is the empty string, and S \in N is the start symbol. The language generated by G, denoted L(G), comprises all strings in \Sigma^* that can be derived from S using the rules in P. This structure limits the grammar to linear dependencies, preventing the expression of the more complex nested or recursive patterns found in higher levels of the Chomsky hierarchy, such as context-free languages. Regular grammars are equivalent to other formalisms for regular languages, including finite automata and regular expressions, meaning any language describable by one can be precisely captured by the others through constructive conversions. For instance, a deterministic finite automaton can be transformed into a regular grammar by creating a nonterminal for each state and productions mirroring the transition function, with accepting states yielding terminal productions. Conversely, regular grammars can be converted to nondeterministic finite automata, where nonterminals become states and rules define transitions. This equivalence underscores their foundational role in automata theory, enabling efficient parsing and pattern matching in applications such as lexical analysis in compilers.

Fundamentals

Definition

A formal grammar, also known as a phrase structure grammar, is a mathematical system used to describe the syntax of formal languages. It consists of four components: a finite set V of nonterminal symbols (variables), a finite set \Sigma of terminal symbols (the alphabet of the language), a finite set P of production rules of the form A \to \alpha where A \in V and \alpha is a string over V \cup \Sigma, and a distinguished start symbol S \in V. These elements define how strings in the language are generated: starting from the start symbol, production rules are repeatedly applied to replace nonterminals with other strings until only terminals remain. A regular grammar is a restricted subclass of formal grammars, classified as Type-3 in the Chomsky hierarchy. Introduced by Noam Chomsky in his 1956 paper "Three Models for the Description of Language," regular grammars were proposed as a formal device capable of generating exactly the regular languages, the simplest class in the hierarchy. In a regular grammar, every production rule must adhere to one of two linear forms: right-linear rules of the form A \to aB, A \to a, or A \to \epsilon, or left-linear rules of the form A \to Ba, A \to a, or A \to \epsilon, where A, B \in V, a \in \Sigma, and \epsilon denotes the empty string. This restriction ensures that a nonterminal appears at most once in each right-hand side, immediately adjacent to a terminal or alone. Regular grammars differ fundamentally from the broader class of context-free grammars (Type-2 in the Chomsky hierarchy), which permit productions A \to \alpha where \alpha can be any string of terminals and nonterminals, allowing for nested or recursive structures beyond linear sequences. By limiting rules to these linear patterns, regular grammars generate precisely the regular languages, which can be recognized by finite automata and lack the expressive power for the more complex dependencies found in context-free languages.
This positions regular grammars as the most constrained yet computationally efficient type in the Chomsky hierarchy, foundational to automata theory.

Formalism

A regular grammar is formally defined as a four-tuple G = (V, \Sigma, R, S), where V is a finite set of nonterminal symbols, \Sigma is a finite set of terminal symbols, R is a finite set of production rules, and S \in V is the start symbol. The sets V and \Sigma are disjoint. The rules in R are constrained to ensure the grammar generates only regular languages. Regular grammars are divided into right-linear and left-linear forms, each with specific structures that limit the right-hand side to at most one nonterminal symbol, making all rules unary (one symbol on the right) or binary (two symbols, with exactly one nonterminal). In a right-linear grammar, every production is of the form A \to aB, A \to a, or A \to \epsilon (the empty string), where A, B \in V and a \in \Sigma. Symmetrically, in a left-linear grammar, every production is of the form A \to Ba, A \to a, or A \to \epsilon. Empty productions A \to \epsilon are permitted in regular grammars to allow generation of the empty string as part of the language. In some definitions, such productions are restricted to the start symbol S only, ensuring the grammar can include \epsilon without introducing unnecessary nullable nonterminals elsewhere. This constraint maintains the grammar's regularity while simplifying equivalence proofs to finite automata.
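As a concrete illustration of this definition, the sketch below encodes rules as (lhs, rhs) pairs, with the right-hand side given as a tuple of symbols, and checks the right-linear constraint. The encoding is an assumed convention for illustration, not a standard API:

```python
def is_right_linear(nonterminals, terminals, rules):
    """Check that every rule has the form A -> aB, A -> a, or A -> epsilon,
    i.e. at most one nonterminal, placed at the right end."""
    for lhs, rhs in rules:                        # rhs is a tuple of symbols
        if lhs not in nonterminals:
            return False
        if len(rhs) == 0:                         # A -> epsilon
            continue
        if len(rhs) == 1 and rhs[0] in terminals:  # A -> a
            continue
        if (len(rhs) == 2 and rhs[0] in terminals
                and rhs[1] in nonterminals):       # A -> aB
            continue
        return False
    return True

# The grammar S -> aS | b is right-linear; S -> Sa is not.
print(is_right_linear({"S"}, {"a", "b"}, [("S", ("a", "S")), ("S", ("b",))]))
```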

Types

Right-linear grammars

A right-linear grammar is a regular grammar in which every production rule is of the form A \to wB or A \to w, where A and B are nonterminal symbols, and w is a (possibly empty) string of terminal symbols. In some definitions, w is restricted to a single terminal for stricter regularity, ensuring a direct correspondence to finite automaton transitions, though the general form with arbitrary terminal strings still generates only regular languages. In a right-linear grammar, derivations generate strings by expanding nonterminals from left to right, with each step appending terminals to the current sentential form while moving the nonterminal to the right end. This process mirrors the sequential path traversal in a deterministic finite automaton (DFA), where each production A \to aB (with a a terminal) corresponds to consuming a symbol and transitioning states, and A \to \epsilon or A \to a marks acceptance. For example, the grammar with productions S \to aS \mid b generates the language consisting of zero or more 'a's followed by a 'b' over the alphabet \{a, b\}, with derivations like S \Rightarrow aS \Rightarrow aaS \Rightarrow aab building the output incrementally from the left. Any regular grammar can be converted to an equivalent right-linear grammar without altering the generated language, typically by transforming left-linear rules into right-linear ones via intermediate finite automaton constructions. This normalization preserves expressive power, as right-linear grammars generate exactly the regular languages. Right-linear grammars offer an advantage in conversion to nondeterministic finite automata (NFAs), as their right-recursive structure allows a straightforward mapping: each nonterminal becomes a state, productions A \to aB yield transitions on a from A to B, and terminating rules A \to w define paths to accepting states. This direct correspondence simplifies automaton construction compared to more general forms.
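The derivation S ⇒ aS ⇒ aaS ⇒ aab described above can be reproduced mechanically. The helper below is an illustrative function, hard-coded to the example grammar S → aS | b, that records each sentential form:

```python
def derive_right_linear(n):
    """Derive a^n b from S -> aS | b, recording every sentential form."""
    forms = ["S"]
    for _ in range(n):                               # apply S -> aS n times
        forms.append(forms[-1].replace("S", "aS"))
    forms.append(forms[-1].replace("S", "b"))        # finish with S -> b
    return forms

print(" => ".join(derive_right_linear(2)))
# S => aS => aaS => aab
```

Note how the nonterminal S stays at the right end of every intermediate form, exactly as the text describes.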

Left-linear grammars

A left-linear grammar is a regular grammar in which every production rule has the form A \to B w or A \to w, where A and B are nonterminal symbols from the grammar's variable set, and w is a (possibly empty) string consisting solely of terminal symbols from the grammar's alphabet. This structure ensures that at most one nonterminal appears on the right-hand side of any production, and when present, it is positioned at the left end. Such grammars belong to the Type-3 category in the Chomsky hierarchy, restricting the generative power to regular languages. The generation process in a left-linear grammar involves derivations in which the nonterminal remains at the left end of each sentential form, with terminals accumulating to its right, so the output string is effectively constructed from right to left during the derivation sequence. This right-to-left building order mirrors the behavior of a nondeterministic finite automaton processing input in reverse, allowing the grammar to simulate acceptance by reading symbols from the end of the string toward the beginning. For example, starting from the start symbol, each rule application prepends a terminal string to the already-generated suffix, until only terminals remain. Left-linear grammars are equivalent to right-linear grammars in expressive power, both generating precisely the class of regular languages. To demonstrate this, the productions of a left-linear grammar G can be reversed, replacing each rule A \to B w with A \to w^R B (where w^R is the reverse of w), to yield a right-linear grammar G_{\mathrm{rev}} such that the language of G is the reverse of the language of G_{\mathrm{rev}}, i.e., L(G) = \{ w^R \mid w \in L(G_{\mathrm{rev}}) \}. The converse construction similarly interconverts right-linear to left-linear forms, preserving regularity.
These grammars find application in scenarios requiring the description of reversed regular languages, such as when modeling dependencies that naturally align with backward scanning in automata, or in specifying nondeterministic branching at the onset of string formation. For instance, they facilitate the construction of finite automata for reversed inputs by directly adapting the grammar's structure to reversed transitions.
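The reversal construction fits in a few lines. The rule encoding ((lhs, rhs-tuple) pairs) is an illustrative convention, applied here to the assumed example grammar S → Sa | b, a left-linear grammar generating b a*:

```python
def reverse_grammar(rules):
    """Reverse every right-hand side symbol-by-symbol, turning a
    left-linear grammar G into a right-linear grammar G_rev with
    L(G_rev) = { w^R : w in L(G) }."""
    return [(lhs, tuple(reversed(rhs))) for lhs, rhs in rules]

# Left-linear grammar S -> Sa | b, generating b a*:
left_linear = [("S", ("S", "a")), ("S", ("b",))]
print(reverse_grammar(left_linear))
# [('S', ('a', 'S')), ('S', ('b',))]  -- right-linear S -> aS | b, generating a* b
```

The resulting grammar S → aS | b generates a*b, which is exactly the reversal of b a*.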

Strictly regular grammars

Strictly regular grammars represent a constrained variant of regular grammars, specifically designed for pedagogical and constructive simplicity in formal language theory. In a right-strictly regular grammar, all production rules take the form A \to a, A \to aB, or A \to \epsilon, where A and B are nonterminal symbols and a is a single terminal symbol from the alphabet \Sigma. Similarly, a left-strictly regular grammar uses rules of the form A \to a, A \to Ba, or A \to \epsilon. These forms ensure that each rule appends or prepends exactly one terminal at a time, avoiding concatenations of multiple terminals in a single production. This restriction maintains equivalence in expressive power to the more general right-linear and left-linear grammars, as both classes generate precisely the regular languages. However, strictly regular grammars facilitate a direct and straightforward conversion to nondeterministic finite automata (NFAs): each nonterminal corresponds to a state, each rule A \to aB defines a transition from A to B on input a, and terminating productions A \to a or A \to \epsilon indicate accepting states or transitions. This mapping simplifies proofs and implementations compared to handling arbitrary terminal strings in general linear rules. A key limitation of strictly regular grammars is their prohibition of rules with multiple consecutive terminals, such as A \to ab, which are permitted in extended regular grammars. Nonetheless, any extended regular grammar can be systematically converted to an equivalent strictly regular grammar by introducing auxiliary nonterminals; for instance, the rule A \to abC is replaced by new rules A \to aD and D \to bC, where D is a fresh nonterminal. This transformation preserves the generated language while adhering to the single-terminal constraint, though it may increase the number of nonterminals.
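The auxiliary-nonterminal transformation can be sketched as follows. Rules are encoded here as (lhs, terminal-string, optional-trailing-nonterminal) triples, and the fresh names _D1, _D2, ... are assumed not to clash with existing nonterminals:

```python
def to_strict(rules):
    """Split rules with several consecutive terminals (e.g. A -> abC)
    into single-terminal steps (A -> aD, D -> bC) by introducing
    fresh nonterminals. Rules are (lhs, terminals, nt_or_None) triples."""
    strict, counter = [], 0
    for lhs, w, nt in rules:
        while len(w) > 1:
            counter += 1
            fresh = f"_D{counter}"         # assumed fresh nonterminal name
            strict.append((lhs, w[0], fresh))
            lhs, w = fresh, w[1:]
        strict.append((lhs, w, nt))        # len(w) <= 1: A -> a, A -> aB, or A -> eps
    return strict

print(to_strict([("A", "ab", "C")]))
# [('A', 'a', '_D1'), ('_D1', 'b', 'C')]
```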

Equivalences

Relation to finite automata

Regular grammars generate exactly the regular languages, which are precisely those accepted by finite automata. Specifically, a language is regular if and only if it is generated by a regular grammar, if and only if it is accepted by a finite automaton. This equivalence is constructive, allowing direct algorithmic translations between the two formalisms, as established in foundational works on automata theory. The conversion from a regular grammar to a nondeterministic finite automaton (NFA) maps the grammar's nonterminals to the automaton's states and its production rules to transitions. For a right-linear regular grammar G = (V_N, V_T, P, S), where V_N is the set of nonterminals, V_T the terminals, P the productions, and S the start symbol, construct an NFA M = (Q, \Sigma, \delta, q_0, F) as follows: set Q = V_N \cup \{f\} with a new accepting state f \notin V_N, \Sigma = V_T, q_0 = S, and F = \{f\}. For each production A \to aB (with A, B \in V_N, a \in V_T), add B to \delta(A, a); for each A \to a, add f to \delta(A, a); for each A \to \epsilon, add f to \delta(A, \epsilon). A similar but reversed construction applies to left-linear grammars: introduce a new start state q_0, map each rule A \to Ba to a transition from B to A on a, map each A \to a to a transition from q_0 to A on a, handle each A \to \epsilon with an ε-transition from q_0 to A, and make the start symbol S the sole accepting state. This process ensures L(G) = L(M), without requiring elimination of nondeterminism. The inverse conversion from an NFA to a right-linear regular grammar uses the NFA's states as nonterminals. Given an NFA M = (Q, \Sigma, \delta, q_0, F), construct a grammar G = (V_N, V_T, P, S) with V_N = Q, V_T = \Sigma, S = q_0. For each transition q \in \delta(p, a) (with p, q \in Q, a \in \Sigma), add the production p \to a q; for each final state f \in F, add f \to \epsilon. For left-linear grammars, the symmetric construction reverses the rules to q \to p a.
These ε-productions encode acceptance directly, preserving language equivalence; no subset construction to a DFA is needed for this mapping. The following outlines the grammar-to-NFA conversion for a right-linear grammar:
Algorithm GrammarToNFA(G = (V_N, V_T, P, S)):
    Q = V_N ∪ {f}  // f is new final state
    Σ = V_T
    q0 = S
    F = {f}
    δ = empty transition function
    for each production rule in P:
        if rule is A → a B (A, B ∈ V_N, a ∈ V_T):
            δ(A, a) = {B}
        if rule is A → a (A ∈ V_N, a ∈ V_T):
            δ(A, a) = {f}
        if rule is A → ε (A ∈ V_N):
            δ(A, ε) = {f}
    return NFA M = (Q, Σ, δ, q0, F)
This algorithm runs in time linear in the number of productions and directly yields an NFA recognizing the grammar's language.
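A runnable realization of this algorithm, together with a simulator for the resulting NFA, might look as follows. The dict-based transition table and the reserved state name "f" are representation choices for illustration, not part of the construction itself:

```python
def grammar_to_nfa(rules, start):
    """Build an NFA from right-linear rules given as (lhs, rhs) pairs,
    where rhs is "aB", "a", or "" (epsilon). "f" is the new final state,
    assumed not to be a nonterminal name."""
    delta = {}
    for lhs, rhs in rules:
        if len(rhs) == 2:                   # A -> aB
            delta.setdefault((lhs, rhs[0]), set()).add(rhs[1])
        elif len(rhs) == 1:                 # A -> a
            delta.setdefault((lhs, rhs[0]), set()).add("f")
        else:                               # A -> epsilon
            delta.setdefault((lhs, ""), set()).add("f")
    return delta, start, {"f"}

def accepts(nfa, s):
    """Simulate the NFA on s, tracking the set of reachable states."""
    delta, start, finals = nfa

    def eps_close(states):                  # follow epsilon ("") moves
        closed, frontier = set(states), list(states)
        while frontier:
            q = frontier.pop()
            for r in delta.get((q, ""), ()):
                if r not in closed:
                    closed.add(r)
                    frontier.append(r)
        return closed

    current = eps_close({start})
    for ch in s:
        nxt = set()
        for q in current:
            nxt |= delta.get((q, ch), set())
        current = eps_close(nxt)
    return bool(current & finals)

# Grammar S -> aS | b (language a*b):
nfa = grammar_to_nfa([("S", "aS"), ("S", "b")], "S")
print(accepts(nfa, "aab"), accepts(nfa, "aa"))  # True False
```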

Relation to regular expressions

Regular grammars generate precisely the class of regular languages, which are exactly those described by regular expressions, as formalized by Kleene's theorem establishing the equivalence between regular grammars, finite automata, and regular expressions. A standard approach to convert a regular grammar to a regular expression proceeds in two steps: first, construct an NFA from the grammar by treating nonterminals as states and productions as transitions (with terminal productions leading to accepting states), then derive the regular expression from the NFA via state elimination (progressively removing states and updating transitions with concatenated expressions) or by applying Arden's lemma to the system of equations modeling paths between states. Direct conversion methods avoid explicit automaton construction by formulating the grammar as a set of language equations, one per nonterminal, where each equation captures the strings derivable from that nonterminal based on the productions; these equations are then solved iteratively using Arden's rule to yield the regular expression for the start symbol. Arden's rule provides a key tool for solving such equations: if P and Q are regular expressions with \epsilon \notin L(Q), then the equation X = P \cup X Q has the unique solution X = P Q^*; symmetrically, X = P \cup Q X has the unique solution X = Q^* P. For instance, given X = aX \cup b, the solution is X = a^* b. In practice, many regular expression libraries, such as those in programming languages, internally compile expressions to finite automata, equivalent to regular grammars, for efficient matching and searching.
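The worked example X = aX ∪ b can be checked empirically. The sketch below samples derivations of the corresponding right-linear grammar X → aX | b and tests them against the Arden solution a*b using Python's re module:

```python
import re

# Arden's rule applied to X = aX ∪ b gives X = a*b.
pattern = re.compile(r"a*b")

def derive(n):
    """Apply X -> aX n times, then X -> b, producing a^n b."""
    return "a" * n + "b"

# Every derivable string matches a*b, and a non-member does not.
for n in range(6):
    assert pattern.fullmatch(derive(n))
assert not pattern.fullmatch("ba")
print("a*b agrees with X -> aX | b on the sampled strings")
```

This only samples finitely many strings; it illustrates the correspondence rather than proving it.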

Examples

Right-linear grammar

A right-linear grammar builds strings from left to right, with terminals accumulating to the left of the nonterminal in each sentential form. Consider the language of all non-empty strings consisting of one or more 0s followed by one or more 1s, denoted L = 0^+1^+. The following right-linear grammar G = (N, \Sigma, P, S) generates L, where N = \{S, A, B\}, \Sigma = \{0, 1\}, and P consists of:
  • S \to 0A
  • A \to 0A \mid 1B
  • B \to 1B \mid 1
For example, the string "0011" is derived as: S \Rightarrow 0A \Rightarrow 00A \Rightarrow 001B \Rightarrow 0011.

Left-linear grammar

A left-linear grammar builds strings from right to left, with terminals accumulating to the right of the nonterminal in each sentential form. Consider the language of all strings over \{a, b\} consisting of zero or more a's followed by zero or more b's, denoted L = a^*b^*. The following left-linear grammar G = (N, \Sigma, P, S) generates L, where N = \{S, T\}, \Sigma = \{a, b\}, and P consists of:
  • S \to Sb \mid T
  • T \to Ta \mid \epsilon
For example, the string "aabb" is derived as: S \Rightarrow Sb \Rightarrow Sbb \Rightarrow Tbb \Rightarrow Tabb \Rightarrow Taabb \Rightarrow aabb, building the string from its right end toward the left.

Mixed example: Binary strings ending with 01

For the language of all strings ending with "01", a right-linear grammar can be used: G = (N, \Sigma, P, S), where N = \{S, A\}, \Sigma = \{0, 1\}, and P consists of:
  • S \to 0S \mid 1S \mid 0A
  • A \to 1
This generates strings like "01", "101", "1001".
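The three example languages in this section can be cross-checked against their regular-expression descriptions. The patterns below restate the languages from the text; the checks only sample a few strings rather than proving equivalence:

```python
import re

zeros_ones = re.compile(r"0+1+")     # right-linear example: 0^+ 1^+
a_star_b_star = re.compile(r"a*b*")  # left-linear example: a* b*
ends_01 = re.compile(r"[01]*01")     # mixed example: strings ending in 01

assert zeros_ones.fullmatch("0011")
assert not zeros_ones.fullmatch("0101")      # 1 before 0 is excluded
assert a_star_b_star.fullmatch("aabb")
assert a_star_b_star.fullmatch("")           # epsilon is in a* b*
for s in ["01", "101", "1001"]:
    assert ends_01.fullmatch(s)
assert not ends_01.fullmatch("10")
print("all example strings classified as expected")
```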

Properties

Expressive power

Regular grammars generate exactly the class of regular languages, which consist of all languages that can be recognized by finite automata and thus require only finite memory independent of input length. These languages capture patterns with bounded repetition or alternation, such as those describable by finite-state machines without unbounded counting or nesting. A key characterization of regular languages is provided by the pumping lemma, which establishes a necessary condition for regularity. For any regular language L, there exists a pumping length p > 0 such that every string w \in L with |w| \geq p can be divided as w = xyz where |xy| \leq p, |y| \geq 1, and xy^iz \in L for all integers i \geq 0. Formally:
\text{If } L \text{ is regular, then } \exists p > 0 \ \forall w \in L \ \big( |w| \geq p \Rightarrow \exists x, y, z \ ( w = xyz \wedge |xy| \leq p \wedge |y| \geq 1 \wedge \forall i \geq 0 \ ( xy^iz \in L ) ) \big).
This lemma implies that sufficiently long strings in regular languages contain a repeatable substring of bounded length whose repetition does not alter membership. The pumping lemma also proves limitations by showing certain languages are non-regular. For example, the language \{a^nb^n \mid n \geq 0\} is not regular. Assume it is, with pumping length k; then take w = a^kb^k \in L with |w| = 2k \geq k, so w = xyz satisfying the conditions. Since |xy| \leq k, both x and y consist only of a's, with |y| \geq 1. Pumping with i = 2 yields xy^2z = a^{k+|y|}b^k, which has more a's than b's and thus is not in L, a contradiction. All standard decision problems for regular languages are decidable. The membership problem, determining whether a string w belongs to L, is solvable in linear time by simulating a DFA on w. The emptiness problem (whether L contains any string) is decided by checking reachability of an accepting state from the start state in the automaton's transition graph.
The finiteness problem (whether L is finite) is decided by detecting cycles on paths from the start state to accepting states, or by testing membership for strings of length between n and 2n-1, where n is the number of states. In the Chomsky hierarchy, regular grammars correspond to Type-3 grammars, which generate precisely the regular languages; every Type-3 grammar produces a regular language, and every regular language has an equivalent Type-3 grammar. This class is properly contained in the Type-2 (context-free) languages, as there exist context-free languages, such as \{a^nb^n \mid n \geq 0\}, that are not regular.
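The linear-time membership test mentioned above amounts to a single pass of a DFA over the input. The sketch below uses an assumed three-state DFA for the language of binary strings ending in "01" (the machine itself is an illustrative example, not taken from the text):

```python
# DFA for binary strings ending in "01": states track the longest
# useful suffix matched so far (q0 = none, q1 = ends in "0", q2 = ends in "01").
delta = {
    ("q0", "0"): "q1", ("q0", "1"): "q0",
    ("q1", "0"): "q1", ("q1", "1"): "q2",
    ("q2", "0"): "q1", ("q2", "1"): "q0",
}

def member(s):
    """One left-to-right pass: time linear in len(s), constant memory."""
    state = "q0"
    for ch in s:
        state = delta[(state, ch)]
    return state == "q2"

print(member("1001"), member("10"))  # True False
```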

Closure properties

Regular languages, the class generated by regular grammars, exhibit closure under several fundamental operations, meaning that applying these operations to regular languages yields another regular language. This property underscores the robustness of the regular language class within the Chomsky hierarchy. Proofs of closure typically rely on constructions involving finite automata equivalent to the grammars, such as nondeterministic finite automata (NFAs) for union, concatenation, and Kleene star, or deterministic finite automata (DFAs) for complement and intersection. The union of two regular languages L_1 and L_2 is regular. This follows from constructing an NFA that simulates the NFAs for L_1 and L_2 in parallel, with ε-transitions from a new start state to their start states and from their accepting states into a new accepting state; the equivalent regular grammar can then be derived from this NFA. Concatenation preserves regularity: if L_1 and L_2 are regular, then L_1 L_2 = \{xy \mid x \in L_1, y \in L_2\} is regular. An NFA for the concatenation connects the accepting states of the NFA for L_1 to the start state of the NFA for L_2 via \epsilon-transitions, yielding an NFA for the result, from which a grammar is obtainable. Regular languages are closed under the Kleene star operation: if L is regular, then L^* = \bigcup_{i=0}^\infty L^i (with L^0 = \{\epsilon\}) is regular. This is shown by modifying the NFA for L to include \epsilon-loops from its accepting states back to its start state, plus a direct path to accept \epsilon; the corresponding grammar follows from the NFA construction. Complement closure holds for regular languages: if L \subseteq \Sigma^* is regular, then its complement \overline{L} = \Sigma^* \setminus L is regular. Starting from a DFA for L, swap accepting and non-accepting states to obtain a DFA for \overline{L}, which can be converted to a grammar. The intersection of two regular languages L_1 and L_2 is regular.
This is established by the product automaton construction: given DFAs for L_1 and L_2, form a DFA whose states are pairs of states from the originals, accepting exactly the pairs in which both components are accepting; the grammar is derived accordingly. De Morgan's law also yields intersection via union and complement, both of which preserve regularity. Regular languages are closed under reversal: if L is regular, then L^R = \{w^R \mid w \in L\} (where w^R reverses w) is regular. For grammars, the reversal of a right-linear grammar is left-linear (and vice versa), obtained by reversing the right-hand sides of the production rules, preserving regularity since both forms generate regular languages; equivalently, reverse all transitions in the corresponding NFA and swap start and accepting states. Finally, regular languages are closed under homomorphisms and their inverses. A homomorphism h: \Sigma^* \to \Gamma^* maps each letter to a word in \Gamma^*; if L \subseteq \Sigma^* is regular, then h(L) = \{h(w) \mid w \in L\} is regular, by substituting the image h(a) for each transition on a in an NFA. For an inverse homomorphism, h^{-1}(L') = \{w \in \Sigma^* \mid h(w) \in L'\} with L' \subseteq \Gamma^* regular, the result is regular, shown by building an automaton that on reading a simulates the automaton for L' on h(a).
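The product construction for intersection can be sketched directly. The two component DFAs below (even number of a's, and strings ending in b) are assumed examples, not taken from the text:

```python
def product_dfa(d1, d2, alphabet):
    """Product construction for intersection: states are pairs of
    component states, accepting iff both components accept.
    Each DFA is a (states, delta, start, finals) tuple."""
    Q1, t1, s1, F1 = d1
    Q2, t2, s2, F2 = d2
    states = {(p, q) for p in Q1 for q in Q2}
    delta = {((p, q), a): (t1[(p, a)], t2[(q, a)])
             for (p, q) in states for a in alphabet}
    finals = {(p, q) for (p, q) in states if p in F1 and q in F2}
    return states, delta, (s1, s2), finals

def run(dfa, s):
    """Standard DFA simulation."""
    _, delta, start, finals = dfa
    q = start
    for ch in s:
        q = delta[(q, ch)]
    return q in finals

# DFA A: even number of a's; DFA B: strings ending in b (alphabet {a, b}).
A = ({"e", "o"},
     {("e", "a"): "o", ("o", "a"): "e", ("e", "b"): "e", ("o", "b"): "o"},
     "e", {"e"})
B = ({"n", "y"},
     {("n", "a"): "n", ("y", "a"): "n", ("n", "b"): "y", ("y", "b"): "y"},
     "n", {"y"})
both = product_dfa(A, B, {"a", "b"})
print(run(both, "aab"), run(both, "ab"))  # True False
```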

Mixing linear rules

In the theory of formal languages, mixed linear grammars for regular languages incorporate production rules that are both left-linear (of the form A \to B \alpha or A \to \alpha, where B is a nonterminal and \alpha is a string of terminals) and right-linear (of the form A \to \alpha B or A \to \alpha) within the same grammar. While unrestricted mixing can produce linear grammars that generate non-regular context-free languages (such as \{ a^n b^n \mid n \geq 0 \}), structured mixtures that preserve regularity are possible, notably through the framework of strongly regular grammars. In these, the rules within each set of mutually recursive nonterminals (nonterminals that can derive each other through intermediate strings) are uniformly left-linear or right-linear, allowing controlled mixing across disjoint components. The expressive power of such mixed regular grammars remains confined to the regular languages, equivalent to those generated by pure left- or right-linear forms. However, for uniformity in processing or proofs, normalization to a pure right-linear (or left-linear) form is often applied, ensuring all rules follow one orientation without altering the language. Normalization algorithms convert mixed regular grammars to right-linear equivalents via an intermediate nondeterministic finite automaton (NFA). Nonterminals become NFA states, with right-linear rules A \to \alpha B inducing transitions from state A to B on \alpha, and terminal rules A \to \alpha marking paths to accepting states; for left-linear rules A \to B \alpha, the construction reverses the transitions (effectively processing the reversed language) before converting back. This introduces intermediate states corresponding to the NFA's structure, yielding a pure right-linear grammar. Mixing linear rules can increase a grammar's ambiguity, as a single string may admit multiple leftmost derivations, complicating parsing even though the language remains regular and unchanged.
For example, consider the following mixed grammar generating the regular language L = \{ ab, c \}:
  • S \to A b (left-linear)
  • A \to a (terminal production)
  • S \to c (terminal production)
Deriving "ab" uses the left-linear rule followed by the terminal production, yielding L(G) = \{ ab, c \}, which is equivalent to the pure right-linear grammar S \to ab \mid c. In this case, ambiguity is minimal, but more intricate mixtures can produce multiple derivations for the same string.
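Such equivalences can be checked by brute-force enumeration for small grammars. The sketch below uses an ad hoc convention (uppercase letters are nonterminals; a length bound keeps the search finite) to compare the mixed grammar above with its pure right-linear counterpart:

```python
def language(rules, start, max_len=4):
    """Brute-force the language of a small grammar by expanding
    sentential forms. rules maps a nonterminal to a list of rhs
    strings; uppercase letters are nonterminals (illustrative
    convention). max_len bounds the number of terminals explored."""
    done, seen, frontier = set(), {start}, [start]
    while frontier:
        form = frontier.pop()
        nts = [c for c in form if c.isupper()]
        if not nts:                      # all terminals: a generated string
            done.add(form)
            continue
        for rhs in rules[nts[0]]:        # expand the leftmost nonterminal
            new = form.replace(nts[0], rhs, 1)
            if new not in seen and sum(c.islower() for c in new) <= max_len:
                seen.add(new)
                frontier.append(new)
    return done

mixed = {"S": ["Ab", "c"], "A": ["a"]}   # S -> Ab | c (left-linear), A -> a
pure = {"S": ["ab", "c"]}                # S -> ab | c (right-linear)
print(language(mixed, "S") == language(pure, "S"))  # True
```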

    Sep 20, 2013 · Ambiguity in context-free languages occurs when different derivations yield the same string. Regular grammars can be ambiguous, but can be made ...