
Cray Operating System

The Cray Operating System (COS), a successor to the Chippewa Operating System that shipped with earlier CDC machines, is a proprietary, batch-oriented operating system developed by Cray Research, Inc., designed specifically for its pioneering supercomputers, beginning with the Cray-1, introduced in 1976, and the subsequent Cray X-MP series. It provided essential multiprogramming and multiprocessing features to manage complex, resource-intensive scientific and engineering workloads, including vector processing and high-speed data transfers, while operating in a rudimentary batch environment without native interactive capabilities.

Introduced alongside the Cray-1—the world's first commercially successful vector supercomputer, capable of up to 160 megaflops—COS served as the foundational software layer for early Cray vector processor architectures, handling system startup, memory allocation across interleaved banks, and I/O operations via channels supporting up to 100 Mbytes/second on systems like the X-MP. Its modular design allowed customization for security, accounting, and data management, with support for temporary and permanent datasets, making it suitable for long-running batch jobs in research institutions and government labs. By the mid-1980s, as systems evolved to include the Cray X-MP, COS began incorporating advanced features like the Guest Operating System (GOS) capability from release 1.15 onward, enabling concurrent execution of the emerging UNICOS—a Unix System V-derived OS—within COS environments to facilitate smoother transitions for users.

COS's legacy lies in its role as the enabler of groundbreaking computational performance during the late 1970s and 1980s, supporting applications such as large-scale scientific simulations and weather modeling on systems with up to 512 MB of central memory and multiple CPUs. Although phased out in favor of more interactive Unix-based successors like UNICOS by the early 1990s, its architecture influenced subsequent Cray software stacks, and the acronym COS has been repurposed in modern HPE Cray systems for a SUSE Linux-based operating environment tailored to exascale supercomputing.

History

Development origins

The development of the Cray Operating System (COS) began at Cray Research in 1974-1975, in parallel with the design of the Cray-1, to provide a tailored software environment for its vector-based hardware architecture. Founded by Seymour Cray in 1972 after he left Control Data Corporation (CDC), the company recognized the need for an operating system optimized for high-performance scientific computing, emphasizing efficient resource utilization and minimal overhead to handle demanding batch workloads such as large-scale physical simulations. This effort addressed the shortcomings of prior systems, including the CDC 6600's operating system, which struggled with the scale and speed requirements of emerging supercomputing applications. COS drew significant influence from the CDC Chippewa Operating System, adapting its framework to the unique demands of vector supercomputing. That system, developed for CDC mainframes, provided a foundation in job scheduling and multiprogramming that Cray Research modified to support up to 63 concurrent jobs on the Cray-1, prioritizing low-latency I/O and high throughput for compute-intensive tasks. This adaptation stemmed from Seymour Cray's experience at CDC, where he contributed to earlier supercomputers, ensuring COS could leverage proven concepts while overcoming limitations in handling vector operations and large datasets in scientific environments.

Release and evolution

The Cray Operating System (COS) was initially released in 1975 in conjunction with the delivery of the Cray-1 supercomputer, with first installations occurring at sites like Los Alamos National Laboratory in 1976. Early versions of COS were provided to support the vector processing capabilities of the Cray-1, enabling efficient batch operation. In 1978, Cray Research introduced the first standard software package for the Cray-1, which included COS alongside a vectorizing Fortran compiler (CFT) and an assembler, standardizing the software environment across installations. COS evolved significantly to accommodate subsequent hardware advancements, particularly with the introduction of the Cray X-MP in 1982, incorporating enhancements for multi-processor configurations and improved resource management in shared-memory environments. The operating system received ongoing updates, with version 1.17 representing a mature iteration that supported both Cray-1 and X-MP systems through the late 1980s; version 1.13 was additionally designated a public version for broader distribution. The final documented release, version 1.17.2, occurred in July 1990, marking the culmination of development efforts. COS saw widespread adoption among major scientific computing users, including national laboratories such as Los Alamos and Livermore, where it facilitated complex scientific simulations. The system supported up to 32 simultaneous users through terminal connections via networks like UNINET, allowing interactive access for job submission and monitoring in a batch-oriented environment. COS was discontinued in the early 1990s as Cray Research transitioned to Unix-based systems like UNICOS, which had been introduced in 1985 for newer architectures; its total active lifespan spanned from 1975 to 1990.

Design

Core architecture

The Cray Operating System (COS) is fundamentally a batch-oriented operating system designed for sequential execution of jobs on Cray supercomputers, lacking multi-user interactivity in order to prioritize efficient resource utilization in a dedicated batch environment. Jobs are submitted in sequences via control statements from front-end processors and processed one at a time or in limited multiprogramming sets, with the system advancing through job steps until completion or termination. This design enables COS to hold up to 63 concurrent jobs in memory on supported hardware, focusing on throughput without interactive user access to the main CPU.

At its core, COS manages system resources through dynamic allocation mechanisms tailored to the Cray-1 and X-MP architectures, including integration with the vector processing capabilities exploited by the system's CFT Fortran compiler. Memory allocation supports up to 1 million 64-bit words per job, typically in 512-word blocks, with provisions for rolling jobs out to disk when resources are constrained. Mass storage is allocated via components like the Dataset Catalog and Disk Queue Manager, handling devices such as magnetic tapes through preallocation or dynamic methods, while CPU time is distributed based on job priority, class, and historical usage patterns, often favoring I/O-bound tasks in time-slice allocation. Accounting records are meticulously maintained in structures such as the Job Accounting Table and Job Communication Block to track resource consumption, including CPU time, channel interrupts, and task executions, ensuring precise billing and system auditing. Overall system control resides in the EXEC component, which monitors resources like the Cray-1's 1 million-word memory and 12 I/O channels without permitting direct user interaction on the primary CPU, instead relying on exchange packages and semaphores for internal coordination. This ensures reliable operation across single-processor and multi-processor configurations, with semaphores synchronizing shared-memory access and preventing conflicts on multi-processor X-MP systems.
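
The block-granular allocation and roll-out policy described above can be pictured with a brief sketch (illustrative Python, not COS code; the Memory class, job dictionary, and roll_out helper are invented names, while the 512-word granularity and 1M-word per-job ceiling come from the text):

```python
# Illustrative sketch of COS-style job memory accounting (assumptions:
# 512-word allocation granularity, 1M-word per-job ceiling, roll-out to
# disk when a request cannot be satisfied).

BLOCK_WORDS = 512             # allocation granularity (64-bit words)
JOB_LIMIT_WORDS = 1_000_000   # per-job ceiling described in the text

class Memory:
    def __init__(self, total_words):
        self.free_blocks = total_words // BLOCK_WORDS

    def request(self, job, words):
        """Round the request up to whole 512-word blocks."""
        if words > JOB_LIMIT_WORDS:
            raise ValueError("exceeds per-job memory limit")
        blocks = -(-words // BLOCK_WORDS)   # ceiling division
        if blocks <= self.free_blocks:
            self.free_blocks -= blocks
            job["blocks"] = job.get("blocks", 0) + blocks
            return True
        return False                         # caller may roll a job out

def roll_out(memory, job):
    """Release a job's blocks, modelling a roll-out to disk."""
    memory.free_blocks += job.pop("blocks", 0)

mem = Memory(total_words=1_048_576)          # e.g. a 1M-word Cray-1
job = {"name": "SIM1"}
if not mem.request(job, 300_000):
    roll_out(mem, job)                       # free space, retry later
```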

I/O and front-end integration

The Cray Operating System (COS) relied on decoupled front-end computers, such as IBM System/370 mainframes or DEC VAX systems, to handle job submission, program compilation, and peripheral I/O tasks, thereby isolating these operations from the main supercomputer and allowing its central processing units (CPUs) to concentrate exclusively on high-performance vector and scalar computations. This architecture ensured that the computationally intensive Cray hardware remained unburdened by slower I/O activities, which were offloaded to the more conventional front-end processors. Interactive and batch job entry occurred via remote terminals connected to the front-end systems, enabling multiprogramming with up to 255 active user programs—including interactive sessions for editing and debugging—while providing no direct terminal access or interactivity on the Cray itself. Users interacted with the system through these front-ends, which staged jobs and datasets for transfer to the Cray, aligning with the batch-oriented model of COS that prioritized efficient throughput for compute-bound workloads. The I/O Subsystem (IOS), comprising multiple I/O Processors (IOPs) and buffer memory, managed datasets through dedicated interfaces like the eXtended I/O Processor (XIOP), supporting high-speed transfers to and from peripherals without involving the main CPUs. Disk I/O was handled locally within each job's context using processors such as the Buffer I/O Processor (BIOP) and Disk I/O Processor (DIOP), operating over 100 Mbyte/s channels to move data blocks (typically 512 64-bit words) and reduce latency and overhead on the central computation pipeline. Communication between front-end systems and the Cray utilized message-passing over dedicated high-speed links, including Front-End Interfaces (FEIs) with fiber-optic options extending up to 3,280 feet and High-Speed External (HSX) channels at 100 Mbyte/s, facilitating asynchronous data staging and job control without disrupting ongoing vector processing operations. This design emphasized reliability and throughput, with the Master I/O Processor (MIOP) coordinating exchanges to maintain seamless operation.
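
The decoupling between front end and mainframe amounts to asynchronous message passing over a staging link. A minimal sketch, assuming a simple in-process queue standing in for the FEI channel (the front_end and cray_side roles and all names here are illustrative, not COS identifiers):

```python
# Minimal sketch of the decoupling described above: a front end stages
# job decks onto a link queue, and the mainframe side drains it
# asynchronously so computation is never blocked on slow peripheral I/O.

import queue
import threading

link = queue.Queue()                      # stands in for the FEI channel

def front_end(jobs):
    for deck in jobs:
        link.put(("JOB", deck))           # stage a job deck for transfer
    link.put(("EOF", None))               # signal end of staging

def cray_side():
    while True:
        kind, payload = link.get()        # asynchronous receive
        if kind == "EOF":
            break
        print("queued for batch execution:", payload)

t = threading.Thread(target=front_end, args=(["DECK1", "DECK2"],))
t.start()
cray_side()
t.join()
```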

Features

Data handling mechanisms

In the Cray Operating System (COS), disk datasets are managed as local entities tied to individual job executions, ensuring efficient resource utilization by automatically deleting them upon job completion unless explicitly preserved. Local datasets that are neither designated as permanent nor manually disposed of are classified as scratch datasets, which are released and cease to exist at the end of the job to free up storage space. Permanent datasets, in contrast, are retained across multiple jobs and sessions through user-initiated commands, providing persistent storage for intermediate results or shared data in computational workflows. These datasets are created using the SAVE control statement within the job deck, allowing users to assign symbolic names and specify access modes for ongoing use. Magnetic tape handling in COS is facilitated by the I/O Subsystem, which supports datasets in interchange format for compatibility with external systems, including labeled tapes that adhere to ANSI conventions for identification and integrity. This subsystem enables multi-volume tape operations, allowing seamless spanning of large scientific datasets across multiple reels to accommodate volumes exceeding single-tape capacities while maintaining data ordering via volume sequencing and block numbering.
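
The scratch-versus-permanent lifecycle can be summarized in a short sketch (hypothetical Python, loosely mirroring the SAVE statement and Dataset Catalog mentioned above; the class and catalog names are invented):

```python
# Sketch of the local/scratch/permanent dataset rules: local datasets
# vanish at job end unless promoted to the system-wide catalog.

CATALOG = {}   # stands in for the system-wide Dataset Catalog

class Job:
    def __init__(self):
        self.local = {}                  # datasets visible to this job only

    def assign(self, name, data):
        self.local[name] = data          # create a local dataset

    def save(self, name):
        CATALOG[name] = self.local[name]  # SAVE-style promotion to permanent

    def end(self):
        self.local.clear()               # unsaved locals are scratch: gone

job = Job()
job.assign("FT01", b"intermediate results")
job.save("FT01")                         # kept; anything not saved is scratch
job.end()
assert "FT01" in CATALOG and not job.local
```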

Job management capabilities

The Cray Operating System (COS) facilitated job submission through a Job Control Language (JCL) modeled after IBM systems, enabling users to define job parameters, resource requirements, and execution directives via structured control statements. Jobs were typically submitted as decks beginning with a mandatory JOB statement, which included essential parameters such as the job name (JN, 1-7 alphanumeric characters), memory limit (M), time limit (T), priority (P, ranging from 0 to 15, where 0 prevented initiation), user class (US), and the number of processors. The ACCOUNT statement was required in the control statement (CS) dataset to specify billing identifiers, while additional control statements in the IN dataset handled input data, source code, and operations like dataset creation or program execution. This front-end JCL processing, often via the Control Statement Processor (CSP), allowed resource limits such as CPU time and memory allocation to be explicitly set, ensuring controlled access to the system's vector processing capabilities for compute-intensive workloads.

Scheduling in COS was managed by the Job Scheduler (JSH), a dedicated task within the System Task Processor (STP) that oversaw queue management, resource allocation, and job prioritization to optimize throughput in the Cray-1's multiprogramming environment. JSH scanned the System Dataset Table (SDT) input queue for eligible jobs, assigning them to installation-defined classes via the Job Class Manager (JCM) based on the CL parameter in the JOB statement, with defaults favoring the highest-ranked class that matched job requirements. Prioritization combined job-specific factors like the P parameter, user class rank, and estimated runtime—derived from historical usage or directives—with installation-tuned time-slice adjustments (e.g., JXTS = I@JSTS3 + I@JSTS2*P + I@JSTS1*P² + I@JSTS0*P³) to favor higher-priority or I/O-bound jobs while balancing CPU-bound tasks through round-robin scheduling. Jobs were placed in the CPU waiting queue after JSH prepared the Job Table Area (JTA) and user field, with execution limited to 256 concurrent entries in the Job Execution Table (JXT); operator commands like JSTART or JRERUN could intervene to adjust queue positions or states. This mechanism ensured efficient multiprogramming, supporting multiple overlapping jobs while adhering to resource constraints like tape device availability tracked in the Tape Device Table (TDT).

COS supported job chaining and dependencies through nested control structures in JCL, allowing sequential or conditional execution of procedures and programs to model complex workflows in batch environments. Procedure definitions used PROC and ENDPROC blocks, invoked via CALL statements with up to seven nesting levels, enabling reuse of common sequences like setups or program chains, with alternate datasets specifiable for flexibility. Dependencies were enforced with conditional blocks (IF, ELSE, ELSEIF, ENDIF) based on error codes or resource status, and iterative loops (LOOP, ENDLOOP) for repetitive tasks; the RECALL macro further delayed processing until I/O operations completed, preventing premature execution. These features facilitated fault-tolerant pipelines, such as linking simulation steps where output from one job served as input for the next, without requiring operator intervention.
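
As a rough illustration of two mechanics above, the comma-separated, period-terminated JOB statement and the time-slice polynomial, here is a hedged Python sketch; the sample JCL line and the installation constants I@JSTS0 through I@JSTS3 are invented values, not documented defaults:

```python
# Hedged sketch: parse a COS-style JOB control statement into the
# parameters named above and evaluate the time-slice polynomial for
# priority P. Constants c0..c3 stand in for I@JSTS0..I@JSTS3.

def parse_job_statement(stmt):
    body = stmt.rstrip(".")                  # statements end with a period
    verb, *params = body.split(",")
    assert verb == "JOB"
    return dict(p.split("=", 1) for p in params)

def time_slice(P, c0, c1, c2, c3):
    # JXTS = I@JSTS3 + I@JSTS2*P + I@JSTS1*P**2 + I@JSTS0*P**3
    return c3 + c2 * P + c1 * P**2 + c0 * P**3

params = parse_job_statement("JOB,JN=HYDRO1,T=600,M=200000,P=10,US=PHYS.")
P = int(params["P"])
print(params["JN"], "time slice:", time_slice(P, c0=1, c1=2, c2=4, c3=100))
```
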
Accounting in COS tracked resource consumption for billing and auditing via integrated logs that captured detailed usage metrics during job execution. The CHARGES statement in JCL specified reporting options, including CPU time (in floating-point seconds via the JTIME macro), memory usage (MM for maximum allocation), I/O activity (WT for wait time, DS for disk sectors accessed, TPS for tape passes), and non-buffered I/O counts (NBF), with data appended to the job's LOG dataset and summarized in the OUT dataset at termination. JSH maintained these records in the JTA (words 71-72 for metrics) and system logfiles, using utilities like DUMPJOB for post-execution analysis; this ensured precise attribution of costs, such as CPU cycles and I/O throughput, to accounts for high-volume scientific computing.

For fault tolerance in long-running compute-intensive tasks like hydrodynamic simulations, COS integrated checkpoint and restart mechanisms to recover from failures without full job abortion. The RERUN statement in JCL enabled automatic rerunnability, triggering job resubmission on non-fatal errors, while the ROLL and ROLLJOB macros wrote the current job state to the $ROLL dataset on disk for later restart. Reprieve processing via the SETRPV and CONTRPV macros suspended execution temporarily for error handling, allowing resumption from the last stable point; the NORERUN option disabled rerunnability for short jobs. These capabilities, coordinated by JSH during roll-in/roll-out, minimized downtime in batch queues by preserving task states and dependencies, though they relied on operator intervention for severe issues.
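
The roll-out/restart idea reduces to periodically serializing job state so a rerun resumes from the last stable point. A minimal sketch, assuming a JSON file standing in for the $ROLL dataset (the checkpoint and restart function names are illustrative, not COS macros):

```python
# Sketch: periodically serialize job state to a $ROLL-style file so a
# rerun resumes from the last stable point instead of starting over.

import json
import os

ROLL_FILE = "roll.json"        # stands in for the $ROLL dataset on disk

def checkpoint(step, state):
    with open(ROLL_FILE, "w") as f:
        json.dump({"step": step, "state": state}, f)   # ROLLJOB analogue

def restart():
    if os.path.exists(ROLL_FILE):
        with open(ROLL_FILE) as f:
            ckpt = json.load(f)
        return ckpt["step"], ckpt["state"]             # resume point
    return 0, {"sum": 0}                               # fresh start

step, state = restart()
for i in range(step, 10):
    state["sum"] += i                  # stand-in for a simulation step
    checkpoint(i + 1, state)           # survives a mid-run failure
```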

Components

EXEC system

The EXEC system in the Cray Operating System (COS) functions as a lightweight message-passing layer that coordinates system communications among front-end systems, I/O devices, and the main CPU. It enables inter-process messaging through mechanisms such as channel management tables—including the Channel Buffer Table (CBT), Channel Header Table (CHT), and Logical I/O Table (LIT)—which process 6-word command and reply packets for tasks like I/O requests (e.g., ROOS, RO11, R022). This layer handles control signals and status updates via handlers such as IOI (I/O completion), TEI, EE (exchange errors), and ME (memory errors), ensuring seamless data flow in the system's high-performance architecture.

EXEC is engineered for minimal overhead in high-speed environments, utilizing short-term locks, system tables (e.g., the Job Table Area or JTA and System Dataset Table or SDT), and interrupt-driven short-burst processing to manage resources without impeding core computations. It supports asynchronous operations through task scheduling, Circular I/O (CIO) routines, and request-reply pairs like PUTREQ/GETREPLY, along with functions such as FTASK and FDLY, which prevent blocking of vector processing on the CPU by allowing non-blocking I/O and delayed executions. These features optimize performance in vector-oriented workloads typical of Cray systems.

The system integrates directly with hardware interrupts to provide rapid responses, employing handlers for events like interprocessor interrupts (IPI), application interrupts (APIIP), and I/O subsystem polling via SYSWAIT, which trigger immediate actions without full context switches. EXEC plays a key role in job initiation by processing signals from the Job Scheduler (JSH) and setting flags like TCEPJ to start executions, while also managing error reporting through routines such as F$CRASH for crash dumps, the Memory Error Log (MEL) table for hardware faults, and job logfiles for timestamped status entries.

Over time, EXEC evolved to accommodate multi-CPU configurations in systems like the Cray X-MP, with enhancements in versions such as COS 1.13 (February 1984) introducing multitasking support, semaphores (e.g., SM@ALOCK for allocation locks, SM@PLOCK for processor locks), and interprocessor communication via requests like PSWITCH for CPU switching and IPCPU for processor-specific messaging. These updates, including bidirectional transfer parameters (BT) in MODE statements and expanded instruction buffers (40 words versus 20 on the Cray-1), improved synchronization and scalability across multiple processors.
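
The request-reply pattern can be approximated with queues of fixed-size packets. The sketch below is illustrative Python: the packet field layout and status codes are invented, and putreq/getreply merely echo the PUTREQ/GETREPLY naming from the text.

```python
# Sketch of EXEC-style request/reply messaging: fixed-size "6-word"
# command packets go onto a request queue (PUTREQ analogue) and the
# sender later polls for a matching reply (GETREPLY analogue).

from collections import deque

requests, replies = deque(), deque()

def putreq(func, channel, addr, count):
    # a 6-word packet: function code, channel, address, count, 2 spares
    requests.append((func, channel, addr, count, 0, 0))

def service_requests():
    while requests:
        func, ch, addr, cnt, *_ = requests.popleft()
        replies.append((func, ch, 0, cnt, 0, 0))   # status 0 = OK

def getreply():
    return replies.popleft() if replies else None  # non-blocking poll

putreq(func=0x12, channel=3, addr=0o4000, count=512)
service_requests()
print("reply packet:", getreply())
```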

STP tasks

The System Task Processor (STP) in the Cray Operating System (COS) serves as a dedicated supervisory layer for handling non-compute tasks, enabling efficient management of system resources without interfering with the main computational workload on the Cray-1 or subsequent systems. It resides in lower memory alongside other core COS components and operates in user mode to process asynchronous tasks related to job control, I/O operations, and error handling, ensuring the operating system's stability and responsiveness.

STP runs multiple concurrent tasks, each designed for specific supervisory functions. Key examples include the Exchange Processor (EXP), which manages data exchanges between user programs and front-end systems for communication and error processing; the Job Scheduler (JSH), responsible for queue handling, job flow control, and resource allocation across multiple jobs; and the Disk Queue Manager (DQM), which oversees I/O queuing for disk operations and dataset management. Other essential tasks encompass the Tape Queue Manager (TQM) for coordinating tape I/O and queue operations, the Accounting Processor Manager for tracking resource usage, peripheral management, and access permissions, and the Error Recovery Manager for detecting, reporting, and recovering from system errors, including job reruns. These tasks exemplify STP's role in partitioning supervisory duties to maintain high system throughput.

Each STP task operates independently, communicating with the EXEC system through messaging mechanisms such as task creation requests (CTSK), release requests (RTSK), and function codes for actions like job initiation (JSTART) or abortion (JABORT), allowing asynchronous execution without blocking other processes. The full suite comprises roughly 20-30 specialized routines dedicated to system maintenance, including additional tasks like the Station Call Processor (SCP) for front-end interactions, the Permanent Dataset Manager (PDM) for data persistence, and the System Performance Monitor (SPM) for ongoing diagnostics, though the exact count varies by configuration, with around 15 core tasks commonly active. This modular design prioritizes efficiency, as STP shares limited resident memory—typically allocated dynamically in blocks starting from several kilobytes for tables and routines—constraining tasks to essential operations within the overall lower memory footprint of COS.
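
The function-code messaging between EXEC and STP tasks resembles a dispatch table keyed by request code. A small sketch under that assumption (handler bodies and the dispatch table are placeholders; only the code names JSTART, JABORT, and CTSK come from the text):

```python
# Sketch of STP/EXEC function-code dispatch: requests such as JSTART or
# JABORT are routed to independent handler tasks by code.

def jstart(job):
    print("initiating", job)

def jabort(job):
    print("aborting", job)

def ctsk(job):
    print("creating task for", job)

DISPATCH = {"JSTART": jstart, "JABORT": jabort, "CTSK": ctsk}

def exec_message(code, job):
    handler = DISPATCH.get(code)
    if handler is None:
        raise ValueError(f"unknown function code: {code}")
    handler(job)            # each STP task runs independently of the rest

exec_message("JSTART", "HYDRO1")
exec_message("JABORT", "HYDRO1")
```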

Legacy

Transition to successors

The transition from the Cray Operating System (COS) to its Unix-based successor, UNICOS, began with the introduction of UNICOS on the Cray-2 supercomputer in 1985, marking the start of a phased replacement strategy. This shift accelerated with the Cray Y-MP series in 1988, which adopted UNICOS as its primary operating system, while older systems underwent a migration program to support UNICOS alongside COS. To facilitate this evolution, Cray Research implemented a Guest Operating System (GOS) feature in COS, enabling dual-operation modes where UNICOS could run as a guest under COS, or vice versa, on compatible hardware, easing the software transition for existing installations. The primary motivations for replacing COS stemmed from the growing demand in 1980s supercomputing for interactive, multi-user environments and adherence to open standards, which COS's batch-oriented design could not fully support. As a batch system optimized for high-throughput job processing on early vector machines, COS faced limitations in handling concurrent interactive sessions and modern networking protocols, rendering it increasingly outdated amid evolving user needs for collaborative computing. UNICOS, derived from AT&T's UNIX System V, addressed these gaps by providing a full-featured, interactive platform with enhanced multi-user capabilities. COS saw its last major deployments in the early 1990s, primarily on legacy Cray-1 and X-MP installations where stability and compatibility with existing workloads justified continued use, with the final COS release (version 1.17.2) occurring in 1990. By the mid-1990s, as these older systems were decommissioned and fully supplanted by UNICOS-equipped architectures like the Y-MP and C90, COS was entirely discontinued in favor of the more versatile Unix-based ecosystem. During this bridge period, UNICOS preserved continuity by incorporating key COS elements, such as compatibility with its vectorizing compilers and migration tools that allowed porting of COS-developed Fortran applications with minimal disruption to scientific workloads. This integration highlighted COS's foundational influence on Cray's operating system evolution, even as the company pivoted toward open standards.

Modern availability

During its active deployment in the 1970s and 1980s, the Cray Operating System (COS) operated under a proprietary license from Cray Research, restricting access to licensed customers and internal use. By the late 1980s, version 1.13 was designated as a public version of the OS, facilitating broader distribution for compatibility and testing purposes, though no surviving copies of its distribution have been publicly documented. Version 1.17, the final major release from around 1990, became available to the preservation community in the early 2010s following a recovery effort that extracted a disk image from a CDC 9877 disk pack used on Cray systems. Modern access to COS relies heavily on emulation projects that replicate the environment of historical Cray systems. The open-source Cray PVP Simulator, developed by Andras Tantos, emulates the X-MP and Y-MP architectures, enabling the execution of unmodified COS 1.17 binaries on contemporary hardware without requiring original front-end systems. This tool supports hobbyist experimentation by simulating vector processing units, I/O processors, and peripherals like disk and tape drives, allowing users to boot and run COS workloads interactively via SSH or browser interfaces. Earlier efforts, such as the DOS-based xmpsim simulator, laid groundwork for these advancements but are now superseded by more comprehensive emulators. Preservation initiatives maintain COS documentation through archival repositories hosting manuals, guides, and internal design documents, primarily at Bitsavers.org and dedicated history sites. These resources include operational procedures and reference manuals for versions up to 1.17, supporting research into early supercomputing architectures. As of 2025, such materials aid academic projects, including efforts to recreate historical programming environments. No commercial support exists for COS, with usage confined to academic institutions and enthusiast communities focused on software heritage and emulation.
