Leadership

Officers

Stephen Simms

OpenSFS President, Indiana University

Stephen Simms manages the High Performance File Systems group at Indiana University. He has worked in HPC for 20 years and has spent 14 of those years working with his team to run the high-performance Lustre file system in production and to pioneer its use across wide area networks.

Kevin Harms

OpenSFS Vice President, Argonne Leadership Computing Facility

Kevin joined the Argonne Leadership Computing Facility (ALCF) in 2007 to work on file systems and storage, and that has remained his focus for the last 10 years. The ALCF has deployed several large GPFS and PVFS file systems during his tenure, and Kevin has held responsibilities ranging from administering day-to-day storage operations to tuning application I/O performance. The initial deployments of the Intrepid and Mira GPFS file systems were the fastest GPFS file systems of their time. In 2016, the ALCF deployed its first Lustre-based file system, and Kevin has been involved in benchmarking and testing this new file system. As a member of the ALCF, Kevin has developed relationships within the HPC community and with key storage vendors. In addition to his role on the facility side, he has also spent a portion of his time on I/O research as a member of Rob Ross’s I/O research team. Prior to joining the ALCF, Kevin worked as a software developer in the telecom industry.

Kirill Lozinskiy

OpenSFS Treasurer, LBNL/NERSC

Kirill Lozinskiy is a Senior HPC Storage Systems Analyst at the National Energy Research Scientific Computing Center (NERSC), where he works as a member of the Storage Systems Group. His interests include parallel file systems, object storage, and scalability. Kirill oversees the High Performance Storage System (HPSS), Lustre, and an integration project between GPFS and HPSS. Before coming to NERSC, Kirill was a Senior HPC Storage Systems Administrator at the Broad Institute, a biomedical and genomic research center, where he maintained over 30 PB of file and object storage and worked on HPC cloud computing initiatives.

Dominic Manno

OpenSFS Secretary, Los Alamos National Laboratory

Dominic Manno is a Scientist in High Performance Computing at Los Alamos National Laboratory. He has a background in storage systems and software development. Dominic’s career began in HPC storage at LANL, with contributions ranging from maintaining systems to integrating and deploying new Lustre file systems. As HPC File System Technical Lead, Dominic is responsible for the design and procurement of all parallel file systems in LANL’s HPC data center, as well as for the team that administers them. He has worked with the Lustre file system since 2014, with work including Lustre administration, file system performance tuning, and improving user performance for high-performance computing applications. Mr. Manno also co-leads a subset of storage research efforts at LANL’s Ultrascale Systems Research Center. Some of the team’s latest work includes re-examining disk failures and protection at extreme scales, designing next-generation storage system deployments, and exploring hardware offload integration within ZFS. This team is also responsible for the current work implementing direct I/O functionality in the ZFS file system.

Board Members

Shawn Hall

At Large Board Member, BP

Shawn Hall’s experience is in large-scale system administration, having worked with high performance computing clusters in industry and academia. He has worked on many aspects of large-scale systems, and his interests include parallel file systems, configuration management, performance analysis, and security. Shawn holds B.S. and M.S. degrees in Electrical and Computer Engineering from Ohio State University.

Sarp Oral

At Large Board Member, Oak Ridge National Laboratory

Dr. Sarp Oral is a Research Scientist at the Oak Ridge Leadership Computing Facility (OLCF) of Oak Ridge National Laboratory, where he is a member of the Technology Integration Group and the Team Lead for the File and Storage Systems projects. His research and development interests include parallel I/O and file system technologies, benchmarking, high-performance computing and networking, and fault tolerance.