MPI Introduction (SC '08 Education Program's Workshop on Parallel & Cluster Computing)
(2008-08-10)
WHAT IS MPI?
The Message-Passing Interface (MPI) is a standard for expressing distributed parallelism via message passing.
MPI consists of a header file, a library of routines, and a runtime environment.
When you compile ...
N-Body Simulation and Collective Communication (SC '08 Education Program's Workshop on Parallel & Cluster Computing)
(2008-08-10)
An N-body problem is a problem involving N “bodies” – that is, particles (e.g., stars, atoms) – each of which applies a force to all of the others.
For example, if you have N stars, then each of the N stars exerts a ...
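The all-pairs structure described in this abstract can be sketched in a few lines. Below is an illustrative O(N²) direct-summation kernel in Python; the function name, the unit gravitational constant, and the sample data are assumptions for illustration, not taken from the workshop materials:

```python
# Direct-summation N-body force kernel: every body applies a force to
# every other body, giving O(N^2) pairwise interactions.
# G = 1.0 and the sample masses/positions below are illustrative assumptions.

def pairwise_forces(masses, positions, G=1.0):
    """Return the net 2-D force vector [fx, fy] on each body."""
    n = len(masses)
    forces = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue  # a body exerts no force on itself
            dx = positions[j][0] - positions[i][0]
            dy = positions[j][1] - positions[i][1]
            r2 = dx * dx + dy * dy
            r = r2 ** 0.5
            # Newtonian gravity: F = G * m_i * m_j / r^2, directed toward body j
            f = G * masses[i] * masses[j] / r2
            forces[i][0] += f * dx / r
            forces[i][1] += f * dy / r
    return forces

# Two equal masses one unit apart attract each other equally and oppositely.
f = pairwise_forces([1.0, 1.0], [[0.0, 0.0], [1.0, 0.0]])
```

Because every body interacts with every other, the inner loop is what parallel N-body codes distribute across processors.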
Supercomputing: An Interview with Henry Neeman
(2011-03-28)
Introduction: “Dr. Henry Neeman is the Director of the OU Supercomputing Center for Education & Research and an adjunct assistant professor in the School of Computer Science at the University of Oklahoma. . . . In addition ...
A Day in the Life of a Networking Professional
(2013-10-10)
What is it like to be a network professional? What are the day-to-day experiences? What issues of professionalism, customer service, project management, and culture does a network professional encounter? How can you ...
Cluster Stack Basics
(2015-05-18)
Linux Cluster Institute (LCI) Workshop
May 18-22, 2015
University of Oklahoma, Thurman J. White Forum Building (OU Forum)
1704 Asp Avenue
Norman, Oklahoma 73072
This workshop focused on Linux system ...
The Tyranny of the Storage Hierarchy (SC '08 Education Program's Workshop on Parallel & Cluster Computing)
(2008-08-10)
What is the storage hierarchy?
--Registers
--Cache
--Main Memory (RAM)
--The Relationship Between RAM and Cache
--The Importance of Being Local
--Hard Disk
--Virtual Memory
Monte Carlo Simulation (SC '08 Education Program's Workshop on Parallel & Cluster Computing)
(2008-08-10)
An application is known as embarrassingly parallel if its parallel implementation:
--can straightforwardly be broken up into roughly equal amounts of work per processor, AND
--has minimal parallel overhead (e.g., communication ...
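Monte Carlo simulation is a classic instance of this: each random sample is computed independently, so the work divides evenly across processors and only the final counts need to be combined. A minimal serial sketch estimating pi (the seed and sample count are arbitrary choices, not from the workshop slides):

```python
import random

def estimate_pi(num_samples, seed=0):
    """Estimate pi by sampling points in the unit square and counting
    the fraction that land inside the quarter circle of radius 1.
    Each sample is independent, so the loop could be split across
    processors with only the per-processor counts combined at the end."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(num_samples):
        x = rng.random()
        y = rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    # Area of quarter circle / area of unit square = pi/4
    return 4.0 * inside / num_samples

# With enough samples the estimate converges toward pi.
estimate = estimate_pi(100_000)
```

Parallelizing this amounts to giving each processor its own seed and sample count, then summing the `inside` tallies, which is why the overhead is minimal.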
CS1303_Neeman interview V1
(2015-03-27)
March 27, 2015:
A video of OSCER Director, Dr. Henry Neeman, being interviewed about supercomputing by Dr. Amy McGovern for an OU Computer Science course (CS1303).
Instruction Level Parallelism (SC '08 Education Program's Workshop on Parallel & Cluster Computing)
(2008-08-10)
What is Instruction-Level Parallelism?
--Scalar Operation
--Loops
--Pipelining
--Loop Performance
--Superpipelining
--Vectors
--A Real Example
Introduction to MPI
(2015-05)
"Introduction to MPI" (Henry Neeman)
LCI Workshop, Mon May 18 2015