Entry point
In computer programming, an entry point is the specific location in a program's code or a library where execution formally begins when the program or module is invoked by the runtime environment. This point typically consists of a designated function or code block that initializes the application, processes command-line arguments, and orchestrates subsequent operations, ensuring proper setup of resources like the runtime library and static constructors.[1][2] The form of an entry point varies across programming languages and environments, reflecting their design philosophies and runtime models. In languages like C and C++, the entry point is usually the main function (or wmain for wide-character support), which the runtime startup code calls after initialization; for Windows GUI applications, it may instead be WinMain or wWinMain.[2][3] In Java, the entry point is a class containing a public static void main(String[] args) method, which the Java Virtual Machine (JVM) invokes to start the application.[4][5] For dynamically linked libraries (DLLs) in Windows, the entry point is often DllMain, called by the operating system during loading or unloading to perform initialization and cleanup tasks.[6][7]
Beyond executables, entry points also play a role in modular and extensible systems. In Python, while scripts execute top-level code as the de facto entry point when run directly (often guarded by if __name__ == "__main__":), the language's packaging system uses entry points as a mechanism for distributions to advertise discoverable components, such as console scripts or plugins, enabling dynamic integration without hard-coded dependencies.[8][9] This concept underscores the entry point's importance in ensuring portability, modularity, and reliable program startup across diverse computing platforms.
Overview
Definition and Purpose
In computer programming, an entry point refers to the specific function, subroutine, or instruction that serves as the initial location for a program's execution once it has been loaded into memory by the operating system or runtime environment.[10] This designated starting address ensures that the processor begins processing the program's code from a predefined point rather than an arbitrary location.[11] The primary purpose of an entry point is to facilitate the orderly initialization of essential program components, including runtime libraries, global variables, stacks, and heaps, before transferring control to the main application logic.[12] By enforcing a controlled commencement, it prevents undefined behavior that could arise from executing uninitialized code segments or jumping to incorrect addresses, thereby establishing a reliable control flow for the entire program lifecycle.[13] Key characteristics of an entry point include its unique identifiability by the linker or runtime system, often achieved through standardized naming conventions (such as "main" in many high-level languages) or explicit attributes specified during compilation and linking processes.[10] In practice, entry points can manifest as main functions in high-level languages, where they handle command-line arguments and orchestrate high-level operations, or as reset vectors in low-level assembly code, which point to the processor's initial instruction upon power-on or reset to initiate hardware and software setup.[14]
Role in Execution Flow
The loader or runtime environment initiates program execution by transferring control to the entry point after loading the executable into memory, allocating space for code, data, and stack, and resolving external symbols and references.[15] This transfer sets the program counter to the entry point's address, marking the transition from operating system management to user-defined code.[16] From the entry point, execution proceeds through a structured control flow that begins with runtime initialization routines, which configure essential components such as the stack for function calls and local variables, the heap for dynamic memory allocation, and global or static variables by initializing them to default values or invoking constructors in languages like C++.[12] These routines, often implemented in startup code like _start in C environments, also process operating system-provided arguments, such as command-line parameters passed via argc and argv in the main function, enabling the program to adapt to invocation context.[17] Once initialization completes, control branches to the core application logic, directing the sequence of operations that define the program's behavior.[12]
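The argument and environment access described above can be sketched in Python, as an analogue of the C signature main(argc, argv, envp); the parameter defaults below are an illustrative convention, not a fixed API:

```python
# Sketch: POSIX-style argument and environment access at the entry point.
import os
import sys

def main(argv=None, environ=None):
    # Default to the real invocation context; accept substitutes for testing.
    argv = sys.argv if argv is None else argv
    environ = os.environ if environ is None else environ
    program_name = argv[0]                  # argv[0] conventionally names the program
    inputs = argv[1:]                       # remaining arguments configure the run
    search_path = environ.get("PATH", "")   # environment supplies settings like PATH
    return program_name, inputs, search_path

if __name__ == "__main__":
    name, args, path = main()
```

Passing explicit argv/environ values, as the test harness of a batch tool might, exercises the same code path as a real invocation.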
Error handling at the entry point focuses on early failure detection during initialization, with mechanisms like return codes from the entry function signaling issues such as resource shortages or invalid configurations to the operating system, which interprets non-zero values as abnormal termination.[17] If initialization fails prior to reaching user code, the loader may report errors like unresolved symbols or memory allocation issues, preventing execution altogether.[15]
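The exit-status contract between a program's entry function and its parent can be observed directly by spawning a child interpreter; this is a sketch of the mechanism, using Python subprocesses in place of a native loader:

```python
# Sketch: the OS-visible contract between entry and exit. The value a
# child's entry point returns becomes the exit status its parent reads.
import subprocess
import sys

# Child whose "main" signals failure with a non-zero status.
child = subprocess.run([sys.executable, "-c", "import sys; sys.exit(3)"])
assert child.returncode == 3  # parent interprets non-zero as abnormal termination

# A successful run conventionally reports status 0.
ok = subprocess.run([sys.executable, "-c", "pass"])
assert ok.returncode == 0
```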
In compiled environments, the entry point bridges operating system calls—such as process creation and signal handling—with user code by invoking platform-specific runtime libraries that abstract hardware details.[12] In contrast, interpreted environments rely on the language runtime, like a Python interpreter, to load source code, compile it on-the-fly to bytecode, and start execution from the script's top-level statements or a designated main block, integrating OS interactions through the interpreter's event loop or module system without a fixed binary entry symbol.
Historical Development
Early Concepts in Computing
The concept of an entry point in computing originated in the 1940s with vacuum-tube based machines like the ENIAC, where execution began at a fixed starting location determined by hardware configuration rather than stored instructions. In the ENIAC, completed in 1945, programs were set up through patch cords and switches that defined the sequence of operations across its 40 panels, including 20 accumulators functioning as both computational units and temporary storage. The entry point was effectively the initial "line" of the program, initiated by stimulating the master programmer unit, which sequenced orders unless altered by jump instructions transferring control to the designated start of a new sequence; these jumps were specified via switches on the function units, ensuring execution commenced from a predefined hardware address without modifiable program storage.[18] Punch-card and punched-tape systems in the late 1940s and 1950s further exemplified implicit entry points, as programs lacked a formal designation like a "main" function and instead started sequentially from the physical beginning of the input medium. In punch-card setups, such as those used with early IBM tabulating machines adapted for computing, the program deck began reading from column 1 of the first card, with conventions limiting source code to columns 1-72 to align with the 80-column format; the loader would interpret this initial card as the program's origin, loading instructions into memory starting from a fixed location without explicit address specification. 
Similarly, punched-paper tape systems, common in machines like the Harvard Mark I (operational from 1944), commenced execution from the tape leader or first punched block, where the absence of headers meant the entry point was inherently the onset of readable data, relying on the reader's hardware to initiate sequential processing.[19][20] At the assembly level, early 1950s assemblers introduced directives to explicitly set starting addresses, marking a transition toward more structured program organization. For instance, the ORG (origin) directive in assemblers for machines like the IBM 701 from 1952 allowed programmers to specify the memory location for the first instruction, such as ORG 1000 to begin assembly at address 1000; this facilitated absolute addressing in symbolic code translated to machine instructions, preventing overlaps and aligning with hardware memory maps in vacuum-tube systems.[21] A key milestone came with stored-program computers in the 1950s, exemplified by the EDSAC, operational in 1949, where entry points were managed via loader-specified jumps following program input. David Wheeler's initial orders, a 32-word bootstrap routine hand-set into memory using uniselectors, read the user's paper tape program into consecutive memory locations starting at word 33; upon completion, it executed an unconditional jump to the entry point at the loaded program's start, establishing the first practical mechanism for dynamic program initiation in a stored-program architecture. This approach, detailed in the 1951 book by Wilkes, Wheeler, and Gill, enabled reliable transfer of control without manual reconfiguration, influencing subsequent systems.[22]
Evolution Through Decades
In the 1960s, entry points emerged as a key abstraction in time-sharing operating systems like Multics, where they were implemented as named symbols within segments that could be linked by loaders to facilitate modular program execution.[23] Multics, developed starting in 1965 by MIT, Bell Labs, and General Electric, allowed segments—units of code or data—to have multiple entry points, enabling alternate starting locations for execution and supporting dynamic linking during loading.[24] This approach marked a shift from rigid, hardware-bound starts to software-defined entry symbols, improving program modularity in multi-user environments.[25] During the 1970s and 1980s, entry points standardized in Unix-like systems, with the C runtime using _start as the initial entry symbol before invoking main(), which received argc and argv parameters for command-line arguments.[26] Unix, originating in the early 1970s at Bell Labs, established this model where the loader passed argument counts and vectors via the stack, influencing portable program design across systems.[27] The rise of structured programming in this era, championed by Edsger Dijkstra's 1972 work, emphasized single-entry/single-exit modules, reinforcing clear entry points to reduce complexity in procedural code.[28] A pivotal event was the 1989 ANSI C standard (X3.159-1989), which formalized main() as the program's primary entry function with signatures like int main(int argc, char *argv[]), ensuring consistent behavior across implementations.[29] This standardization built on earlier high-level languages like Fortran, which introduced the MAIN program block in 1957 as the designated entry point for execution.[30] In the 1990s, the advent of graphical user interfaces and modular architectures further evolved entry points, notably in the Windows Portable Executable (PE) format introduced with Windows NT 3.1 in 1993, where the entry point was specified as a Relative Virtual Address (RVA) in the optional header for efficient loading.[31] This RVA-based design supported dynamic relocation and dependency resolution, impacting executable portability on x86 systems.[32] Concurrently, early virtualization efforts, such as IBM's Multiple Virtual DOS Machines (MVDM) facility introduced in the OS/2 2.0 pre-release in 1991 and VMware's x86 hypervisor prototype in 1998, began abstracting entry points within virtual machines, allowing isolated execution starts without direct hardware access.[33][34] These developments accommodated GUI-driven launches and modular DLLs, prioritizing flexible entry handling in distributed environments. By the early 2000s, just-in-time (JIT) compilers and dynamic loading refined entry point management, as seen in Java's public static void main(String[] args) method, standardized since JDK 1.0 in 1996 but widely adopted with HotSpot's JIT optimizations around 2000 for better performance.[26] Java's model addressed multi-threading by initializing the main thread at this static entry before spawning others, integrating with class loaders for runtime code injection.[35] The PE format's influence persisted, standardizing entry RVAs across Windows applications and enabling seamless integration with JIT-generated code in managed environments.[31]
Usage Patterns
In Batch and Standalone Programs
In batch processing, the entry point is invoked by system schedulers, such as cron, which execute programs as independent processes at predetermined times without user intervention, prioritizing minimal input/output to enable quick termination and efficient handling of queued tasks. This approach stems from early batch computing practices, where programs run in isolation to maximize throughput on shared resources.[36] The scheduler typically launches the executable via shell commands or direct system calls, passing control to the program's designated starting function upon loading.[37] For standalone executables, the operating system directly invokes the entry point when the program is run from the command line or a similar non-interactive context, performing essential setup like memory allocation and environment initialization before proceeding to the core logic, all without requiring user prompts.[37] This invocation replaces the calling process image with the new one, ensuring the entry point receives full control immediately after loading the executable file.[37] Argument passing to the entry point in these scenarios relies on standardized mechanisms defined by POSIX, where the main function receives an argument count (argc), an array of argument strings (argv) terminated by a null pointer, and optionally an environment array (envp) for configuration details suited to isolated executions.[37] The argv element conventionally holds the program name or filename, while envp provides access to system variables like PATH or custom settings, facilitating adaptability in batch or direct runs without interactive input.[37] Representative examples include scientific simulations and data processing tools, where the entry point initializes predefined workflows—such as parsing command-line parameters for input files, allocating resources for computations, and directing output to files or logs—before executing the task and exiting to free system resources promptly. 
In high-performance computing environments, this structure supports batch runs of simulations that process large datasets in fixed sequences, emphasizing reliability and minimal overhead.
In Interactive and Modern Environments
In interactive environments, entry points facilitate dynamic, user-driven execution rather than linear flows, often through loops or event handlers that respond to inputs or triggers. The Python Read-Eval-Print Loop (REPL), for example, acts as the primary entry point for interactive programming, invoked by running the python command, which launches an interpreter loop displaying a >>> prompt for immediate code evaluation and execution until termination.[38] Similarly, in web browsers, the load event on the Window object provides a critical entry point for JavaScript, firing after the full page—including stylesheets, images, and subframes—has loaded, enabling initialization of interactive features like dynamic content updates.[39]
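The read-eval-print structure behind such an interactive entry point can be sketched with the standard-library code module, which exposes the same evaluation machinery the full REPL builds on; the two hard-coded lines below stand in for user input:

```python
# Sketch: a minimal read-eval loop, using code.InteractiveInterpreter in
# place of the full python REPL entry point.
import code

namespace = {}
interp = code.InteractiveInterpreter(locals=namespace)

# Each string stands in for one line read at the >>> prompt.
for line in ["x = 1 + 1", "y = x * 10"]:
    interp.runsource(line)  # evaluate immediately, as the REPL does

assert namespace["x"] == 2
assert namespace["y"] == 20
```

State accumulates in the shared namespace across iterations, which is what lets an interactive session build on earlier inputs.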
Modern operating systems and container platforms integrate entry points to manage responsive application lifecycles. In Android, the onCreate() method serves as the foundational entry point for activities, called once during startup to handle essential tasks such as binding data to views, initializing variables, and associating with components like ViewModels, before transitioning to visible states.[40] For container orchestration, Docker's ENTRYPOINT directive defines the default executable or script executed upon container launch, allowing customization of startup behaviors—such as environment setup or argument handling—while supporting overrides via runtime commands for flexibility in deployment.[41] In distributed serverless paradigms, AWS Lambda's handler function constitutes the event-driven entry point, receiving invocation events and context data to process requests, such as uploading files to storage services, with the handler specified as filename.function_name for precise invocation control.[42]
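The handler shape that event-driven platforms such as AWS Lambda expect can be sketched as a plain module-level function; the function name, event keys, and response shape below are illustrative assumptions, not a fixed contract:

```python
# Sketch: an event-driven entry point in the Lambda style, a module-level
# function taking (event, context). The runtime would locate it from a
# "filename.function_name" handler setting; here we invoke it locally.
def handler(event, context):
    # The event dict carries the invocation payload; context carries
    # runtime metadata (unused in this sketch).
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"hello, {name}"}

# Local invocation with a stand-in event and no context object.
result = handler({"name": "lambda"}, None)
assert result["statusCode"] == 200
```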
Security in these environments prioritizes sandboxing at entry to constrain privileges and mitigate risks from untrusted inputs. Browsers inherently sandbox JavaScript execution within isolated contexts, preventing scripts from accessing the host filesystem or other processes, thus limiting potential exploits during onload initialization.[43] In Docker, entrypoint scripts should run as non-root users via USER directives and security options like --security-opt=no-new-privileges to block escalation, ensuring containerized processes remain isolated.[44] For Lambda handlers in multi-tenant setups, 2020s trends emphasize tenant isolation through authorization checks at invocation—such as validating access tokens—and encryption of event data to prevent cross-tenant leaks in shared environments.[45]
The 2010s marked key advancements in microservices, where configurable entry points via declarative manifests became central to scalability. API gateways emerged as unified entry points, routing requests to backend services while enforcing security and aggregating responses to minimize client latency.[46] In Kubernetes, pod specifications in YAML manifests allow overriding Docker entrypoints with custom command and args fields, enabling tailored startup logic across distributed services without altering images.[47] These patterns, building on early microservices concepts from 2011-2014, supported decentralized architectures with polyglot services and lightweight protocols.[48]
Relation to Exit Point
Defining the Exit Point
The exit point in a computer program refers to the designated code path, instruction, or function call that initiates the termination of the program's execution, ensuring that resources are properly released and control is returned to the operating system or calling environment.[49] This mechanism contrasts with the entry point, which marks the beginning of program execution.[50] Exit points can be categorized into several types based on the nature of termination. Normal exits occur when the program completes successfully or intentionally, typically via a function like exit() that returns a status code such as 0 or EXIT_SUCCESS to indicate success.[49] Abnormal exits happen due to errors or external interventions, often triggered by signals like SIGTERM, which force termination without standard cleanup.[50]
At the core of an exit point's operation are mechanics designed to maintain system integrity. These include flushing output buffers to prevent data loss, freeing dynamically allocated memory to avoid leaks, closing open file descriptors, and propagating an exit status code—usually an 8-bit integer—to the parent process for evaluation via functions like wait().[49] In normal termination, registered cleanup functions (e.g., via atexit()) are executed in reverse order, and temporary files may be removed automatically.[49]
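The reverse-order execution of registered cleanup functions can be demonstrated with Python's atexit module, which mirrors the C atexit() mechanism; a child interpreter makes the ordering observable from outside:

```python
# Sketch: cleanup handlers registered during startup run in reverse
# (LIFO) order at normal exit, mirroring C's atexit() semantics.
import subprocess
import sys

child_src = """
import atexit
atexit.register(lambda: print("first registered"))
atexit.register(lambda: print("second registered"))
"""

out = subprocess.run(
    [sys.executable, "-c", child_src],
    capture_output=True, text=True,
).stdout.splitlines()

# The last handler registered is the first to run.
assert out == ["second registered", "first registered"]
```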
Integration in Program Lifecycle
The program lifecycle encompasses three primary phases: initialization at the entry point, execution of core logic, and finalization at the exit point, forming a symmetric structure that promotes orderly resource handling and state management from startup to shutdown. This structure is invoked by the operating system, which calls the designated entry point—such as the main function in C—to begin execution, and concludes when the program returns control via the exit point, ensuring cleanup and status reporting to the runtime environment.
Interactions between entry and exit points are facilitated through mechanisms that bridge initialization and termination; for instance, in C, the atexit function, typically registered during entry point execution, queues handler functions to execute automatically upon normal program exit via exit or return from main, enabling deferred cleanup without manual invocation at every termination path. Signals raised during core execution can lead to abnormal termination, potentially bypassing standard cleanup unless handled by signal handlers. As outlined in the definition of the exit point, this ensures that abrupt ends align with the program's overall flow.[51]
Resource management integrates entry and exit points by pairing allocation during initialization with deallocation at termination; for example, dynamic memory or file handles acquired in the entry phase must be released via exit handlers to prevent leaks, as mismatches can lead to resource exhaustion in long-running or repeated executions. In C, atexit supports this by allowing registration of deallocation routines at startup, which execute in reverse order of registration during exit, maintaining balance across the lifecycle.[51]
In advanced configurations, such as plugin systems or multithreaded environments, programs feature multiple entry points—like DLL entry via DllMain in Windows for process attachment—and corresponding exit points for detachment, necessitating synchronization primitives to coordinate lifecycle events and avoid conflicts during concurrent loading or unloading. For multithreaded programs, thread-specific entry points (e.g., functions passed to pthread_create in POSIX) and exits (via pthread_exit) must synchronize with the main program's entry and exit to ensure collective resource release, often using mutexes or barriers to prevent premature termination or orphaned allocations.[52]
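The coordination between thread-level entry points and the process's exit can be sketched with Python's threading module, an analogue of the pthread_create/pthread_exit pattern described above:

```python
# Sketch: each thread's target function is its entry point (analogous to
# pthread_create's start routine); joining threads before main returns
# synchronizes their exits with the program's exit.
import threading

results = []
lock = threading.Lock()

def worker(n):
    # Thread entry point; the lock guards the shared result list.
    with lock:
        results.append(n * n)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()  # block the main thread's exit until every worker finishes

assert sorted(results) == [0, 1, 4, 9]
```

Without the join barrier, the main thread could reach its exit point while workers still hold resources, the orphaned-allocation hazard noted above.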
Implementations in Programming Languages
Procedural and Imperative Languages
In procedural and imperative languages, the entry point serves as the designated starting location for program execution, typically initiated by the operating system after loading the executable and initializing global variables. This entry point often involves a runtime startup routine that sets up the environment before transferring control to the user-defined main function or equivalent. Common across these languages is the handling of global initialization, where static and global variables are allocated and initialized prior to invoking the entry point, ensuring a consistent state for imperative control flow. In C and C++, the operating system entry point is commonly the _start symbol provided by the runtime library, such as in GNU's libc, which performs initial setup like stack allocation and argument passing before calling the user's main function. The main function signature is standardized as int main(int argc, char *argv[]) or, as a common nonstandard extension, int main(int argc, char *argv[], char *envp[]), where argc counts command-line arguments and argv is an array of argument strings; it returns an integer exit code to indicate program status to the OS. Explicit linking can override the default entry point using tools like the GNU linker (ld) with the --entry option to specify a custom symbol, allowing fine-grained control in embedded or low-level applications.
FORTRAN, a foundational imperative language, defines the entry point through the main program unit, which implies sequential execution from its first statement in fixed-form source code, where columns 1-5 are reserved for labels and column 6 for continuation indicators. The ANSI FORTRAN 77 standard formalized this structure, permitting an optional PROGRAM statement (which, when present, names the main program) at the head of the primary executable unit, with execution proceeding imperatively through statements until an END or STOP directive. This design reflects FORTRAN's origins in scientific computing, prioritizing straightforward control flow without runtime argument handling in early versions.
Pascal employs a program block structure for its entry point, beginning with a program declaration (optionally named) followed by a begin keyword to initiate the main executable statements, enclosed by an end to signal completion. This block-oriented approach enforces imperative sequencing, with global initialization occurring prior to entry as part of the compilation unit setup. Compiler directives, such as those in ISO Pascal or extensions in implementations like Free Pascal, allow configuration of the entry point for modular programs or runtime environments, enabling customization for different hardware targets.
Functional and Declarative Languages
In functional and declarative languages, entry points emphasize declarative specifications and immutability, where program execution begins through top-level forms or designated functions that encapsulate side effects within controlled structures like monads or uniqueness types, contrasting with imperative mutation.[53] Haskell programs initiate execution from the main function in the Main module, which must have the type IO () to perform input/output actions while preserving referential transparency outside the IO monad.[54] The Glasgow Haskell Compiler (GHC) runtime system initializes the program by invoking this main after setting up the runtime environment, ensuring lazy evaluation and garbage collection are active from the start. The IO monad, introduced in the early 1990s as a composable mechanism for side effects in pure functional languages, wraps imperative-style operations at the entry point to maintain purity elsewhere.[53]
In Common Lisp, there is no fixed entry point like a mandatory main; instead, execution often starts by evaluating top-level forms upon loading the primary source file, or by specifying a toplevel function such as (defun main () ...) for standalone executables.[55] Load-time evaluation, controlled via the eval-when special form with phases like :load-toplevel and :execute, ensures that initialization code—such as loading dependencies—runs when the image is dumped or the program starts, supporting dynamic environments typical of Lisp variants.[55]
OCaml, blending functional and imperative paradigms, uses a top-level expression like let () = main () as the entry point, where main is a unit-returning function that triggers the program's primary logic without binding a value to a name.[56] In bytecode compilation with ocamlc, the runtime ocamlrun interprets the code sequentially from the loaded modules, while native compilation via ocamlopt produces standalone executables where entry points are determined by the linking order of .cmx files, raising errors if dependencies are unresolved at link time.[57]
Clean, a purely functional language with lazy evaluation by default, requires a Start rule, defined in the program's main implementation module (an .icl file), as the explicit entry point, which computes the program's world state transitions while delaying evaluation until results are needed.[58] This approach, leveraging uniqueness types for safe side effects, postpones entry point effects until demanded, aligning with 1990s advancements in type-safe I/O for declarative languages.[58][53]
Object-Oriented and Scripting Languages
In object-oriented languages like Java and C#, the entry point is typically a designated static method within a class that the runtime environment invokes to begin program execution. In Java, the Java Virtual Machine (JVM) starts execution by calling the public static void main(String[] args) method in the specified class, which serves as the bootstrap entry for standalone applications. This method conventionally resides in a public class and accepts command-line arguments via the args parameter, enabling the program to process inputs immediately upon launch. The JVM identifies this entry point through the JAR manifest's Main-Class header or direct class specification, ensuring encapsulated object initialization occurs from this static context before instance creation.[5]
Similarly, in C# running on the .NET Common Language Runtime (CLR), the entry point is the static void Main(string[] args) method, which the runtime invokes first in console or executable applications. This method initializes the application's object-oriented structure, including garbage collection setup and static constructors, allowing developers to orchestrate class loading and resource allocation from a central point. Since C# 7.1, this entry point supports asynchronous execution with static async Task Main(string[] args), facilitating non-blocking I/O operations at startup, such as database connections or API calls, without altering the core lifecycle. The CLR handles garbage collection initialization concurrently with this entry, optimizing memory management in object-heavy environments.[59][60]
Scripting languages, often dynamically typed and interpreted, emphasize flexibility in entry points to support rapid development and runtime evaluation. In Python, scripts execute from the top-level code, but the conventional entry point is guarded by the idiom if __name__ == "__main__":, which runs a main() function or block only when the module is invoked directly rather than imported. This approach leverages the interpreter's execution model, where the __main__ module namespace is set upon direct run (e.g., python script.py), enabling reusable code with isolated script behavior and supporting garbage collection from the outset. Python's interpreter initializes its runtime environment, including the global namespace, at this entry, accommodating dynamic object creation without strict class requirements.[8]
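The dual behavior of the guard idiom can be verified by running the same file both ways; this sketch writes a small guarded script to a temporary directory and observes the difference:

```python
# Sketch: the __main__ guard in action. The same file behaves differently
# when run directly versus imported, because __name__ changes.
import os
import subprocess
import sys
import tempfile

source = (
    "def main():\n"
    "    print('running as a script')\n"
    "\n"
    "if __name__ == '__main__':\n"
    "    main()\n"
)

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "guarded.py")
    with open(path, "w") as f:
        f.write(source)

    # Direct run: __name__ is '__main__', so main() executes.
    run = subprocess.run([sys.executable, path], capture_output=True, text=True)
    assert run.stdout.strip() == "running as a script"

    # Import: __name__ is 'guarded', so nothing is printed.
    imp = subprocess.run(
        [sys.executable, "-c", "import guarded"],
        capture_output=True, text=True, cwd=tmp,
    )
    assert imp.stdout == ""
```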
PHP scripts, commonly used for web and command-line tasks, begin execution from the first line after the opening <?php tag, serving as the implicit entry point without a formal method declaration. In CLI mode, invoked via php script.php, the interpreter processes the entire file sequentially, initializing the superglobals $argc and $argv from command-line arguments and setting up garbage collection for dynamic variables; this differs from web server invocation, where the entry aligns with HTTP request handling but still starts at the script's top. The CLI binary, distinct from CGI, ensures standalone execution with shebang support (e.g., #!/usr/bin/php) for direct invocation, emphasizing PHP's runtime flexibility over rigid object encapsulation.[61]
Ruby and Perl, as interpreted scripting languages, lack a mandatory entry method, with execution commencing at the file's top line upon invocation (e.g., ruby script.rb or perl script.pl), allowing immediate dynamic code evaluation and object instantiation. In Ruby, developers often use if $0 == __FILE__ to demarcate script-specific code, mirroring Python's guard while the interpreter handles garbage collection and module loading from this starting point. Perl follows suit, processing the script line-by-line after optional shebang, with the runtime initializing its symbol table and memory management at entry for procedural or object-oriented extensions. These designs prioritize scripting efficiency, enabling quick prototyping with runtime garbage collection tailored to dynamic typing.[62][63]
In the 2010s, Node.js's event-driven asynchronous model influenced scripting entry points, promoting top-level async constructs like async function main() in JavaScript modules for non-blocking starts, which inspired similar updates in languages such as Python's asyncio integration within __main__ blocks and C#'s async Main for containerized scripts. This evolution supports modern, container-friendly deployments where entry points handle asynchronous initialization efficiently, reducing latency in I/O-bound applications.[64]
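The asynchronous entry-point pattern described above can be sketched in Python with asyncio.run driving a top-level main coroutine; the fetch helper is a hypothetical stand-in for real non-blocking I/O:

```python
# Sketch: an asynchronous entry point. asyncio.run starts the event loop
# and drives main() as the program's top-level coroutine.
import asyncio

async def fetch(n):
    # Stand-in for a non-blocking I/O operation (e.g. an API call).
    await asyncio.sleep(0)
    return n * 2

async def main():
    # Initialization can await several operations concurrently at entry.
    return await asyncio.gather(fetch(1), fetch(2), fetch(3))

values = asyncio.run(main())
assert values == [2, 4, 6]
```

This keeps the synchronous module top level as the formal entry point while deferring all I/O-bound work to the event loop, the same shape as async Main in C# or a top-level async main() in Node.js.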
Other Notable Languages
In APL, traditional defined functions (tradfns) are delimited by the Del symbol (∇), which marks the beginning and end of the function definition, allowing for structured programming with headers specifying arguments and results. The APL interpreter typically executes scripts starting from line 0 in numbered line mode, serving as the implicit entry point for sequential evaluation.[65][66] The Go programming language designates the entry point as the func main() within package main, which the compiler automatically identifies and links as the program's starting function. Prior to main, any init() functions in packages execute sequentially, and these may launch goroutines for concurrent initialization, ensuring the runtime environment is prepared before the primary entry.[67]
Rust uses fn main() as the conventional entry point for executable binaries, invoked by the runtime after static initializers. For foreign function interfaces or custom linking, functions can be marked with #[no_mangle] to preserve names without Rust's symbol mangling, and Cargo's linker specifications in Cargo.toml or build scripts allow overriding the default entry for embedded or low-level targets.[68][69][70]
Swift employs the @main attribute to designate the program's entry point, applied to a type conforming to App for SwiftUI applications or a delegate class; introduced in Swift 5.3 (2020), it streamlined launch with a focus on iOS and macOS ecosystems. Similarly, Dart relies on a top-level main() function as the entry point, returning void and optionally accepting command-line arguments, integral to its design for web and mobile apps since the 2010s. The D language also centers on main() as the entry point for console programs, executed after module initializers and unittests, supporting systems programming with C-like compatibility.[71][72]
In Node.js implementations of JavaScript, the entry point is the executed script file (often index.js), where top-level code runs immediately, and module.exports exposes functions or objects for modular reuse across files. GNAT, the Ada compiler, treats the main procedure as the primary entry point within the main task, with additional tasks enabling concurrent execution via task bodies activated at elaboration; custom entries require linker flags like -e.[73]
LOGO, an educational language from the 1960s, operates interactively without a strict compiled entry point, instead beginning execution via primitive commands like TO for defining procedures or direct invocation of built-in primitives such as FORWARD and REPEAT in the listener. QB64, a modern BASIC variant, defaults to executing from the first non-declaration line but supports SUB Main as an explicit entry subroutine for structured programs, callable at startup.[74][75]
Visual Basic applications start at the Main procedure, which controls initialization and often launches the startup form for GUI programs, as defined in project settings. In Xojo, desktop applications enter via the App class's Open event, which typically shows the initial window or form, supporting cross-platform development. Pike executes top-level code in scripts as the entry point, with no mandatory main function, aligning its dynamic nature for server and multimedia uses.[76]
For WebAssembly modules in the 2020s, especially under WASI, the entry point is often the exported _start function for command modules, which the host invokes once after instantiating the module and linking its imports, enabling portable systems code. TypeScript transpiles to JavaScript, preserving the entry point as the compiled main function or top-level code in the output, configurable via tsconfig.json for module systems like CommonJS or ES modules.[77][78]