OpenSFS provides a wide range of videos, PowerPoint presentations, PDFs, and other data and documentation related to our own and our Participants' open source file system activities. We provide these materials on an as-is basis; most come from events or meetings. If there is a presentation you would like to obtain and do not see here, please contact us.
File System Hero Run Best Practices Task: White Paper "Zero to Hero"

High-performance computing (HPC) systems provide value through their ability to parallelize computational jobs and reduce the time to solution. It is natural to look for performance metrics: users need to know whether a given system can run their jobs, managers want to know whether they received good value from their vendor, and so on. When a system is procured, the vendor often provides peak performance numbers, which are based on the theoretical maximum of each component. Storage benchmarks can often run at 80% or more of the theoretical peak, but good results are almost never achieved on the first attempt. This white paper is a high-level guide to improving benchmark results until the desired performance is reached.
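To illustrate the comparison the white paper describes, here is a minimal sketch (not from the paper itself) of estimating a storage system's theoretical peak from its component specifications and computing a benchmark's efficiency against it. All component counts and rates below are hypothetical examples.

```python
# Hypothetical sketch: theoretical peak bandwidth is bounded by the
# slower of the aggregate drive path and the aggregate network path.
# All figures are illustrative assumptions, not real system specs.

def theoretical_peak_gbps(num_servers, drives_per_server, drive_gbps,
                          network_gbps_per_server):
    """Return the peak bandwidth in GB/s, limited by the slower path."""
    drive_bound = num_servers * drives_per_server * drive_gbps
    network_bound = num_servers * network_gbps_per_server
    return min(drive_bound, network_bound)

# 8 servers, 10 drives each at 0.5 GB/s, 12.5 GB/s of network per server
peak = theoretical_peak_gbps(num_servers=8, drives_per_server=10,
                             drive_gbps=0.5, network_gbps_per_server=12.5)
measured = 34.0  # hypothetical benchmark result, GB/s
efficiency = measured / peak
print(f"peak = {peak} GB/s, efficiency = {efficiency:.0%}")
# prints: peak = 40.0 GB/s, efficiency = 85%
```

In this sketch the drives (40 GB/s aggregate) are the bottleneck rather than the network (100 GB/s), and the measured run lands at 85% of peak, in the range the white paper cites as achievable for well-tuned storage benchmarks.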
Scalable Parallel File Systems I/O Characterization: 2015 Survey

The OpenSFS Benchmarking Working Group (BWG) aims to provide an open source file system benchmark suite, along with guidance on file system benchmarking best practices, to satisfy the benchmarking requirements of scalable parallel file system users and facilities. Toward this end, the BWG aims to characterize the file system workloads deployed at various high-performance and parallel computing facilities and institutions. Using these gathered characteristics, the working group will identify and build the I/O benchmarks required to emulate these workloads, and will provide documentation for the benchmark suite. An initial version of this effort was published in 2014. Your assistance is needed once more: please help us better characterize the file system workload at your facility by completing our survey. Thank you!