Best practices on Lustre parallel file systems
Introduction
Lustre is a type of parallel distributed file system, generally used for large-scale cluster computing. Files are distributed across multiple servers, and then striped across multiple disks.
A Lustre file system has three major functional units:
- Metadata servers (MDS) that store namespace metadata, such as filenames, directories, access permissions, and file layouts.
- Object storage server (OSS) nodes that store file data on one or more object storage target (OST) devices.
- Client(s) that access and use the data.
When a client accesses a file, it performs a filename lookup on the MDS. When the lookup is complete and the user and client have permission to access and/or create the file, either the layout of the existing file is returned or a new file is created.
For read or write operations, the client then interprets the file layout, which maps the file logical offset and size to one or more objects, each residing on a separate OST. The client then locks the file range being operated on and executes one or more parallel read or write operations directly to the OSS nodes.
After the initial lookup of the file layout, the MDS is normally not involved in file I/O operations, since all block allocation and data I/O are managed internally by the OSTs. Clients do not modify the objects or data on the OST file systems directly, but instead delegate this task to the OSS nodes.
Best practices
The Lustre file system is shared among many users. It is optimized for parallel I/O and large files. Please note that
- working with a large number of small files
- running thousands of unnecessary I/O operations per second (running Open/Close in a loop)
- accessing the same file with hundreds of processes
will not only slow down your jobs; it can overload the entire file system and affect all users. Therefore, please read our best practices guide carefully before using /cluster/work or /cluster/scratch.
Limit repetitive Open/Close operations
If you need to write many values to a file as part of a loop, there are several ways to do it. Make sure you never put the open and close calls inside the loop, as in this Python example:
for i in range(1000):
    f = open('test2.txt', 'a')   # opens the file on every iteration
    f.write(some_data)
    f.close()                    # and closes it again immediately
This opens and closes the same file 1000 times, causing a total of 2000 I/O operations, 1998 of which are unnecessary. It is sufficient to open the file once, write all values to it, and close it at the end, resulting in only 2 I/O operations:
f = open('test1.txt', 'w')   # one open, before the loop
for i in range(1000):
    f.write(some_data)
f.close()                    # one close, after the loop
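A more idiomatic variant of the same pattern uses a context manager, so the file is closed automatically. This is a minimal sketch with a placeholder some_data string:

some_data = 'value\n'  # placeholder for whatever you actually write

# The with-statement closes the file automatically, even on errors,
# and still performs only one open and one close in total.
with open('test1.txt', 'w') as f:
    for i in range(1000):
        f.write(some_data)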
Limit repetitive "stat" operations
If your code needs to check at some point whether a file exists, for example while waiting for output from another process, do not poll for it in a tight loop: every check is a stat operation that queries the MDS. Checking once every few seconds is sufficient, as sketched below.
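A minimal sketch of such a throttled check in Python, assuming a hypothetical file name result.txt and a 5-second interval:

import os
import time

# Each os.path.exists() call is a stat operation that queries the MDS,
# so check at a modest interval instead of in a tight loop.
while not os.path.exists('result.txt'):
    time.sleep(5)  # one stat every 5 seconds instead of thousands per second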
Directory listings: ls vs. ls -l
If you run the ls command to list a file or a directory, it only queries the MDS for this information. When you run the command with the -l option, it also needs to contact the OSS nodes to look up the file sizes, which creates additional load on the storage system.
- Use ls if you would like to list files and directories
- Only use ls -l if you also need to know the file sizes
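As a simple illustration (the directory and file names below are hypothetical):

# Cheap: listing names only requires an MDS lookup
ls $SCRATCH/mydir

# Expensive: -l must additionally query the OSS nodes for file sizes
ls -l $SCRATCH/mydir

# If you only need the size of one file, restrict -l to that file
ls -l $SCRATCH/mydir/output.dat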
Don't store a large number of files in a single directory
Avoid Accessing Small Files on Lustre Filesystems
Use a Stripe Count of 1 for Directories with Many Small Files
Avoid Accessing Executables on Lustre Filesystems
Increase the Stripe Count for Parallel Access to the Same File
Restripe Large Files
Limit the Number of Processes Performing Parallel I/O
Avoid Having Multiple Processes Open the Same File(s) at the Same Time
Troubleshooting
Working with stripes (advanced users)
Lustre will always try to distribute your data across all OSTs. The striping parameters can be tuned per file or directory.
How to display the current striping settings
The current stripe settings of a file or directory can be displayed with the command lfs getstripe:
[sfux@eu-login-24-ng ~]$ lfs getstripe $SCRATCH/__USAGE_RULES__
/cluster/scratch/sfux/__USAGE_RULES__
lmm_stripe_count:   1
lmm_stripe_size:    1048576
lmm_pattern:        1
lmm_layout_gen:     0
lmm_stripe_offset:  3
	obdidx		objid		objid		group
	     3	       619261	      0x972fd	            0
[sfux@eu-login-24-ng ~]$
For directories, use the -d option:
[sfux@eu-login-24-ng ~]$ lfs getstripe -d $SCRATCH
stripe_count:   1
stripe_size:    1048576
stripe_offset:  -1
[sfux@eu-login-24-ng ~]$
- stripe_count = -1 : Use the filesystem default stripe count (= spread data to all OSTs)
- stripe_size = 1048576 : Use 1 MiB stripe/chunk size
- stripe_offset = -1: Let Lustre choose the next OST (you shouldn't change this)
How to change stripe settings
The stripe settings of a directory can be changed with the command lfs setstripe; see the example after the notes below.
Note!
- You cannot change the striping of existing files
- You can always change the striping parameters of an existing directory
- It is possible to create files with non-default striping parameters with the lfs command
- A subdirectory inherits all stripe parameters from its parent directory (if not changed via lfs setstripe)
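A minimal sketch of how this could look, assuming a hypothetical directory $SCRATCH/bigfiles (the -c option sets the stripe count, -S the stripe size):

# Stripe all new files in this directory across 4 OSTs with a 4 MiB stripe size
mkdir $SCRATCH/bigfiles
lfs setstripe -c 4 -S 4M $SCRATCH/bigfiles

# Verify the new settings
lfs getstripe -d $SCRATCH/bigfiles

# Create an empty file with non-default striping (the file must not exist yet)
lfs setstripe -c 4 $SCRATCH/bigfiles/output.dat

New files created inside the directory inherit its stripe parameters; files that already exist keep their old striping and have to be copied to a location with the desired settings to pick up the new layout.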