
Micro-credential in High Performance Computing
The purpose of the Micro-credential in High Performance Computing is to recognize the expertise that students gain in high-performance computing during their studies. Below is a listing of the required (core) courses and electives.
Core Courses
- 101 Introduction to cluster computing: Linux, shell scripting, queuing systems, cluster architecture
- 303 GPU Programming
Elective Courses
- 201 Scientific Computing in C++
- 301 Parallel Computing with OpenMP
- 302 Parallel Computing with MPI
Cost
- Free for active University of Houston System (UHS) students, staff or faculty
- $250/course badge for non-UH individuals
4 Badges = 1 Micro-credential
How it works: Register for and complete four HPE DSI courses, including the two core courses listed above, and the HPE DSI will automatically award you the Micro-credential in High Performance Computing. Badges for the micro-credential are awarded at the end of each semester.
Core Course Descriptions
To receive the Micro-credential badge, complete the courses listed below in any semester. These courses will neither affect your GPA nor appear on your transcript. The description for each course can be found below:
Introduction to cluster computing: Linux, shell scripting, queuing systems, cluster architecture
This course introduces participants to the computing environment found on UH High-Performance Computing clusters such as Carya and Sabine, including how to prepare HPC workflows, submit jobs to the queuing system, and retrieve results. Other topics include general HPC concepts, cluster system architecture, system access, customizing your user environment, compiling and linking code for CPUs or GPUs, the SLURM batch scheduling system, batch job scripts, and submission of sequential and parallel (CPU/GPU) jobs to the batch system for several HPC applications, including MATLAB, R, Python, and NAMD. Topics covered in Linux include user accounts, file permissions, file-system navigation, the command-line interface (CLI), command-line utility programs, file and folder manipulation, and standard text editors. Topics covered in shell scripting include built-in commands, control structures, file descriptors, functions, parameters, and variables.
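To give a flavor of the SLURM material, a minimal batch script for a sequential job might look like the sketch below. The partition is omitted and the module and file names are illustrative placeholders; the actual names on Carya or Sabine may differ (run `module avail` and consult the cluster documentation).

```shell
#!/bin/bash
#SBATCH --job-name=hello_hpc      # job name shown in the queue
#SBATCH --ntasks=1                # one task (a sequential job)
#SBATCH --cpus-per-task=1
#SBATCH --mem=1G
#SBATCH --time=00:05:00           # wall-clock limit, hh:mm:ss
#SBATCH --output=hello_%j.out     # %j expands to the job ID

# Load a software module (name is illustrative)
module load python

# The actual work: run a script from the submission directory
python hello.py
```

A script like this would typically be submitted with `sbatch hello.slurm` and monitored with `squeue -u $USER`.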
GPU Programming
This course introduces participants to the world of general-purpose computation on graphics processing units. Topics include programming GPUs from high-level languages used in High-Performance Computing environments, such as MATLAB and Python, and through directive-based approaches such as OpenACC. Programming with CUDA, which gives specialists in parallel programming more direct control over GPU resources, will also be covered.
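As a taste of the CUDA portion, the classic introductory example is a vector-add kernel, where each GPU thread computes one element. This is a sketch, not course material; it must be compiled with `nvcc` and run on a CUDA-capable GPU.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each thread adds one element: c[i] = a[i] + b[i]
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    // Unified memory is accessible from both host and device
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements
    int threads = 256, blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %.1f\n", c[0]);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```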
Elective Courses
To receive the Micro-credential badge, complete the courses listed below. These courses will neither affect your GPA nor appear on your transcript. The description for each course can be found below:
Scientific Computing in C++
C++ is one of the most widely used programming languages, particularly in the STEM fields, and C++ compilers are available for the majority of computer architectures and operating systems. This course provides the skills to understand and write C++ code, with many hands-on sessions for writing, compiling, and debugging C++ programs. You will understand and use the basic constructs of C++; manipulate C++ data types such as arrays, strings, containers, and pointers; isolate and fix common errors in C++ programs; use memory appropriately, including proper allocation/deallocation procedures; and apply object-oriented approaches to software problems in C++, making use of structs, classes, and objects. Some of the newest features of C++ will also be reviewed.
Parallel Computing with OpenMP
In today’s multicore world, single-socket, single-core systems are almost extinct, and reasonable performance gains can be extracted from many-core systems by employing standardized shared-memory parallel programming methods such as OpenMP. This tutorial introduces shared-memory parallel programming using the OpenMP application programming interface, beginning with a quick introduction to writing parallel code with OpenMP directives. Topics include the fork-join execution model, data scoping, work sharing, reductions, synchronization, task parallelism, accelerator offloading, and the OpenMP runtime functions. Several examples of modifying serial code to run in parallel will be presented. Participants with basic programming experience are welcome. Upon completion of this course, users should be able to write parallel applications using OpenMP directives.
Parallel Computing with MPI
The Message Passing Interface (MPI) is a standardized library specification for exchanging messages between the processes of a parallel program running across distributed memory. Its goal is to establish a portable, efficient, and flexible standard for writing message-passing programs. Well-written MPI-based applications can take advantage of the scalable processing power offered by distributed computing clusters such as UH’s HPE DSI clusters. This course teaches those unfamiliar with MPI how to develop and run parallel programs according to the MPI standard. Topics include MPI environment management, point-to-point communication, collective communication routines, and one-sided communication. Advanced topics such as MPI data types for message passing, groups and communicators, virtual topologies, and hybrid programming will also be covered.
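As a taste of point-to-point communication, the classic first sketch sends one integer from rank 0 to rank 1. This is illustrative only; it must be compiled with an MPI wrapper such as `mpicxx` and launched with at least two processes, e.g. `mpirun -np 2 ./a.out`.

```cpp
#include <cstdio>
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  // this process's ID
    MPI_Comm_size(MPI_COMM_WORLD, &size);  // total number of processes

    if (rank == 0 && size > 1) {
        int payload = 42;
        // Blocking send: (buffer, count, type, destination, tag, communicator)
        MPI_Send(&payload, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        int payload = 0;
        MPI_Recv(&payload, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d from rank 0\n", payload);
    }

    MPI_Finalize();
    return 0;
}
```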