RCC History

The Research Computing Center continues a legacy of almost 40 years of high performance computing at Florida State University.

Supercomputing came to the Florida State University campus through the auspices of the Supercomputer Computations Research Institute (SCRI), which began operating in 1984. The U.S. Department of Energy established the center in response to a nationwide discussion on the need to advance research in a variety of fields, all of which required large-scale computers. SCRI ushered in a period in which several national supercomputing centers were established and in which most research universities began investing in high performance computers.

Over the years, several supercomputing configurations came to Florida State, all of which played a central role in the use of computers to advance science and engineering and in the development of algorithms and software that take advantage of these computational resources. SCRI was a pioneering venture for the nation, and its work continues to this day at FSU.

SCRI Established


Supercomputer Computations Research Institute (SCRI) — opened at Florida State University in 1984.

CDC Cyber 205


2 Vector Pipelines, 0.4 GFLOPS Peak Speed, 32 MB Memory

In March 1985, SCRI took delivery of its first supercomputer, a CDC Cyber 205, along with a front-end file server, a CDC Cyber 835. The Cyber 205 system included a CPU with a 20-nanosecond clock cycle, two vector pipelines, 32 megabytes of central memory, and 7.2 gigabytes of on-line disk storage. The Cyber 205 had a theoretical peak performance of 200 MFLOPS and a LINPACK rating of 17 MFLOPS. The Cyber 835 added another 20 gigabytes of on-line disk. In addition, four 6250 bpi on-line tape drives were shared between the Cyber 205 and the Cyber 835. Communications between the Cyber 205, its peripherals, and various front-end mainframes were handled by a Loosely-Coupled Network (LCN) consisting of four separate coaxial trunks. In its final configuration, the LCN connected the Cyber 205 to two CDC Cybers, two DEC VAX computers, the ETA-10, and an IBM mainframe.

By April 1985, the Cyber 205 was running production code from local researchers and DOE researchers around the country. The operating system software was soon upgraded to VSOS 2.2, providing more features and increased stability. Supported languages included FORTRAN (with a vectorizing pre-processor), C with vector extensions, and Cyber 205 assembly language. Numerous mathematical packages such as CERN, IMSL, and MAGEV were installed, as well as the DI-3000 and NCAR graphics packages.

The CDC Cyber 205 served FSU researchers from March 1985 until October 1989.

ETA-10 Machines

1987 - 1989

8 CPUs, 4 GFLOPS Peak Speed, 1.024 GB Memory

Installation of the first prototype ETA-10 processor began at the FSU Computing Center on January 5, 1987. The clock cycle time for this CPU was 12.5 nanoseconds, and within two weeks it was running, in monitor mode, a FORTRAN job transferred from the Cyber 205. A second CPU arrived in the spring, and by summer a four-processor (12.5 ns clock) configuration was in place. No user access was available at this stage, but the FSU installation team was able to perform some benchmarking and special-purpose testing.

In the fall of 1987, the machine was upgraded to full ETA-10E specifications: a 10.5-nanosecond clock cycle time, 4 million words of local memory per CPU, 128 million words of shared memory, and 14.4 billion bytes of on-line disk. In October, an Ising model was running in multiprocessing monitor mode, achieving a new world record for performance (6 billion spin flips per second).

In December of that year, the FSU Computing Center (FSUCC) consolidated operations with a relocation to the Sliger Building in Innovation Park. Additionally, SCRI staff began to evaluate and implement an ETA port of the UNIX operating system.

A single-processor, air-cooled ETA-10Q (19 ns clock) was deployed on December 12, initially to help ETA users with short-term migration problems from the EOS operating system. The "piper," as the air-cooled ETA-10 was known, ran the latest version of EOS, 1.1C, and allowed two or three local researchers to complete calculations that would not otherwise have been possible on a busy Cyber 205. The system proved a success for a user community of this size running production VSOS jobs.

Despite occasional problems, UNIX worked out much better for the FSU supercomputer community at large. The system was more versatile and supported more users at any one time than EOS was capable of; often, a single processor would be running a dozen interactive sessions, three NQS local batch jobs, and the occasional background process. Even so, system stability remained much the same as with EOS, which could support only a couple of jobs concurrently at best. The main difference proved to be that, under UNIX, the ETA-10 was an interactive supercomputer, while under EOS it could only be used as a remote batch machine.

Cray Y-MP/432


4 CPUs, 1.3 GFLOPS Peak Speed, 256 MB Memory

On November 15th, 1989, it was announced that an agreement had been reached between FSU, Control Data, and Cray Research that the existing ETA-10G would be exchanged for a comparably-equipped Cray Y-MP, to be manufactured and delivered by Cray in late February and early March of 1990. The first item on the timeline was training on UNICOS installation for Systems group members, which took place between February 19th and February 21st at Cray's training facility, located in Eagan, Minnesota. The Systems group trainees then went to Cray's manufacturing checkout facility in Chippewa Falls, Wisconsin, and, along with Cray analysts, installed UNICOS 5.1 on the yet-to-be shipped Cray Y-MP, serial number 1513.

On April 9th, as originally scheduled, the Cray became officially available for production with a pre-installed base of user names and files, although some researchers had been running production programs since April 5th.

CM-2 Connection Machine


65,536 CPUs, 5 GFLOPS Peak Speed, 2 GB Memory

In 1990, SCRI also installed the CM-2 Connection Machine from Thinking Machines Corporation. This massively parallel computer had 65,536 processors and 2 GB of memory. Problems that require a massively parallel architecture finally achieved the gigaflop computational rates that are often a prerequisite for success. The classic example is lattice gauge theory, which ran continuously on the CM-2 for approximately nine years at a sustained speed of five gigaflops.

SGI Challenge XLS


18 CPUs, 6.5 GFLOPS Peak Speed, 4.096 GB Memory

FSU purchased two Silicon Graphics Power Challenge XLS machines in 1995. The first had 10 MIPS R10000 RISC processors and 2,048 MB of central memory, for a peak speed of 3,600 megaflops. The second had eight processors, 2 GB of central memory, and a peak speed of 2,900 megaflops. The Challenge was a scalable shared-memory multiprocessing (SSMP) system with the advantages of both shared-memory systems and distributed computing systems.



QCDSP


1,024 CPUs, 50 GFLOPS Peak Speed

In 1997 and 1998, SCRI scientists Robert Edwards and Tony Kennedy, in collaboration with physicists from Columbia University, Ohio State University, Fermilab, and Trinity College, Dublin, designed an inexpensive teraflop-scale massively parallel supercomputer suitable for work in lattice QCD. Three computers were constructed, one of which, a 1,024-node/50-gigaflop version, was installed at FSU.

The cost of the FSU 50-gigaflop computer was $175,000, which made it affordable to small research groups. The machine had three times the sustained speed of FSU's other supercomputer (the CM-2 Connection Machine), and about a factor of 20 beyond anything else. The processors were widely used Texas Instruments digital signal processors running at 50 MHz.

Origin 2000


18 CPUs, 9 GFLOPS Peak Speed, 4.5 GB Memory

SCRI installed a Silicon Graphics Inc. Origin 2000 in December 1998. The Origin 2000's processors were interconnected by a hypercube of CrayLink channels, each a bi-directional 1.6 GB/second pathway, enabling the system to move data at around 6.4 gigabytes per second.



Teragold and Eclipse


680 CPUs, 2,253 GFLOPS Peak Speed, 297 GB Memory

In 2000, SCRI staff installed an IBM RS/6000 SP system, "Teragold." In 2002, "Eclipse," an IBM eServer p690, was added. The combined system was ranked 34th on the TOP500 list of supercomputers in June 2002, and it was the fastest academic research computer in the world at the time.

Shared HPC


1,536 CPU Cores, 15.1 TFLOPS Peak Speed, 15.1 TB Memory

In September 2007, the High-Performance Computing Cluster was commissioned as a multi-disciplinary, shared, expandable resource for FSU researchers. The system is the most flexible, accessible, and functional computational research system ever deployed at FSU, and it has been in continuous operation since 2007.

At its inception, the system consisted of 256 Dell PowerEdge SC1435 compute nodes and eight Dell PowerEdge 6950 login nodes. The system utilizes an InfiniBand fabric to provide low-latency access to all of the compute nodes for MPI jobs.

Around this time, SCRI changed its name to the Shared-HPC facility. This group was managed under the Department of Scientific Computing, with oversight provided by an Executive Board consisting of scientific researchers, HPC staff, and University administrators.

Research Computing Center


In 2013, the Shared-HPC became part of Information Technology Services. The number of full-time system support staff increased, and the name changed to Research Computing Center to reflect the new scope of the department. By this time, the RCC was supporting the operations of the HPC, an interactive computing system ("Spear"), a cloud-computing infrastructure ("SKY"), and a high-throughput cluster, Condor.

The Future

2023 and beyond

The RCC remains committed to supporting scientific research at FSU. In the past decade, the need for computational research facilities has skyrocketed across all scientific domains. The RCC has expanded to support many departments at FSU, including biology, medicine, meteorology, engineering, business, and many others.