From ScientificComputing
Revision as of 08:07, 7 January 2022 by Jarunanp (talk | contribs)


AlphaFold2 predicts a protein's 3D folding structure from its amino acid sequence with an accuracy that is competitive with experimental results. This AI-powered structure prediction by AlphaFold2 was recognized as the scientific breakthrough of the year 2021. The AlphaFold package is now installed in the new software stack on Euler.

Load modules

The AlphaFold module can be loaded as follows.

$ env2lmod
$ module load gcc/6.3.0 openmpi/4.0.2 alphafold/2.1.1
Now run 'alphafold_init' to initialize the virtual environment

The following have been reloaded with a version change:
  1) gcc/4.8.5 => gcc/6.3.0

$ alphafold_init
(venv_alphafold) [jarunanp@eu-login-18 ~]$ 


The AlphaFold databases are available for all cluster users at /cluster/project/alphafold.

If you wish to download the databases separately, you can find the instructions here.

Alternatively, users can download the databases to their personal $SCRATCH. If there are several AlphaFold users in your group, institute, or department, we recommend using group storage under /cluster/project instead. The total size of the unpacked AlphaFold databases is ~2.3 TB.

Download the AlphaFold databases to your $SCRATCH

  • Download and install aria2c in your $HOME
$ cd $HOME
$ wget
$ tar xvzf aria2-1.36.0.tar.gz
$ cd aria2-1.36.0
$ module load gcc/6.3.0 gnutls/3.5.13 openssl/1.0.1e
$ ./configure --prefix=$HOME/.local
$ make
$ make install
$ export PATH="$HOME/.local/bin:$PATH"
$ which aria2c
  • Check that you have enough space in your $SCRATCH; free up space there if necessary.
$ lquota
| Storage location:           | Quota type: | Used:            | Soft quota:      | Hard quota:      |
| /cluster/home/jarunanp      | space       |         10.38 GB |         17.18 GB |         21.47 GB |
| /cluster/home/jarunanp      | files       |            85658 |           160000 |           200000 |
| /cluster/shadow             | space       |         16.38 kB |          2.15 GB |          2.15 GB |
| /cluster/shadow             | files       |                7 |            50000 |            50000 |
| /cluster/scratch/jarunanp   | space       |          2.42 TB |          2.50 TB |          2.70 TB |
| /cluster/scratch/jarunanp   | files       |           201844 |          1000000 |          1500000 |
  • Create a folder for the databases
$ mkdir alphafold_databases
  • Download the databases: you can call a single script to download all the databases, or a separate script for each database. These scripts are located in the directory $ALPHAFOLD_ROOT/scripts/.
$ bsub -W 24:00 "$ALPHAFOLD_ROOT/scripts/ $SCRATCH/alphafold_databases"
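If you only need some of the databases, the upstream AlphaFold repository ships one download script per database next to the bulk download script. The script names below are taken from the upstream repository and are an assumption about what the module provides; a hedged sketch:

```shell
# Submit one 24-hour download job per database (script names as in the
# upstream AlphaFold repository; assumed to be shipped with the module).
bsub -W 24:00 "$ALPHAFOLD_ROOT/scripts/download_pdb70.sh $SCRATCH/alphafold_databases"
bsub -W 24:00 "$ALPHAFOLD_ROOT/scripts/download_uniref90.sh $SCRATCH/alphafold_databases"

# Afterwards, verify the unpacked sizes (~2.3 TB in total for all databases)
du -sh $SCRATCH/alphafold_databases/*
```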

Submit a job

Here is an example of a job submission script (run_alphafold.bsub) which requests 12 CPU cores, 120 GB of total memory, 120 GB of total local scratch space, and one GPU. This job folds the monomeric protein Ubiquitin (76 aa).

#BSUB -n 12
#BSUB -W 4:00
#BSUB -R "rusage[mem=10000, scratch=10000, ngpus_excl_p=1]"
#BSUB -J alphafold

source /cluster/apps/local/
module load gcc/6.3.0 openmpi/4.0.2 alphafold/2.1.1
source /cluster/apps/nss/alphafold/venv_alphafold/bin/activate

# Define paths to databases, fasta file and output directory

python /cluster/apps/nss/alphafold/alphafold-2.1.1/ \
--data_dir=$DATA_DIR \
--output_dir=$OUTPUT_DIR \
--max_template_date="2021-12-06" \
--bfd_database_path=$DATA_DIR/bfd/bfd_metaclust_clu_complete_id30_c90_final_seq.sorted_opt \
--uniref90_database_path=$DATA_DIR/uniref90/uniref90.fasta \
--uniclust30_database_path=$DATA_DIR/uniclust30/uniclust30_2018_08/uniclust30_2018_08 \
--mgnify_database_path=$DATA_DIR/mgnify/mgy_clusters_2018_12.fa \
--pdb70_database_path=$DATA_DIR/pdb70/pdb70 \
--template_mmcif_dir=$DATA_DIR/pdb_mmcif/mmcif_files \
--obsolete_pdbs_path=$DATA_DIR/pdb_mmcif/obsolete.dat \

# Copy the results from the compute node
mkdir -p output
cp -r $OUTPUT_DIR/* output
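The script above uses $DATA_DIR and $OUTPUT_DIR without defining them, and the run command also needs the input sequence (in AlphaFold 2.1 this is the --fasta_paths flag). A minimal sketch of the missing definitions, with placeholder paths you must adapt; the fasta path is hypothetical:

```shell
# Placeholder definitions for the job script above (adapt the paths).
DATA_DIR=/cluster/project/alphafold    # shared databases on Euler
OUTPUT_DIR=$TMPDIR/output              # local scratch on the compute node
FASTA=$HOME/ubiquitin.fasta            # your input sequence (hypothetical path)

mkdir -p $OUTPUT_DIR
# Then pass the input sequence to the python command, e.g.:
#   --fasta_paths=$FASTA \
```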

To fold a multimeric protein, pass the option --model_preset=multimer and set --pdb_seqres_database_path and --uniprot_database_path. The command to run AlphaFold becomes:

python /cluster/apps/nss/alphafold/alphafold-2.1.1/ \
--data_dir=$DATA_DIR \
--output_dir=$OUTPUT_DIR \
--max_template_date="2021-12-06" \
--bfd_database_path=$DATA_DIR/bfd/bfd_metaclust_clu_complete_id30_c90_final_seq.sorted_opt \
--uniref90_database_path=$DATA_DIR/uniref90/uniref90.fasta \
--uniclust30_database_path=$DATA_DIR/uniclust30/uniclust30_2018_08/uniclust30_2018_08 \
--mgnify_database_path=$DATA_DIR/mgnify/mgy_clusters_2018_12.fa \
--pdb_seqres_database_path=$DATA_DIR/pdb_seqres/pdb_seqres.txt \
--uniprot_database_path=$DATA_DIR/uniprot/uniprot.fasta \
--template_mmcif_dir=$DATA_DIR/pdb_mmcif/mmcif_files \
--obsolete_pdbs_path=$DATA_DIR/pdb_mmcif/obsolete.dat \
--model_preset=multimer \

Submit the job with the command

$ bsub < run_alphafold.bsub

The screen output is saved in an output file whose name starts with lsf.o followed by the job ID, e.g., lsf.o195525946. Please see this page for how to read the output file.
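To check on a submitted job and inspect the results, the usual LSF commands apply; the job ID and file names below are examples:

```shell
# List your pending and running jobs
bjobs
# Peek at the screen output of a running job (example job ID)
bpeek 195525946
# After completion, read the LSF output file and list the predictions
less lsf.o195525946
ls output/
```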

Benchmark results

AlphaFold2 uses HHsearch and HHblits from the HH-suite to perform protein sequence searches. These searches issue many random file accesses and small reads. It is therefore recommended to store the AlphaFold databases on a solid-state drive (SSD), which sustains significantly more input/output operations per second (IOPS) than a traditional mechanical hard disk drive (HDD).

We tested the performance of AlphaFold folding two proteins (Ubiquitin, 76 amino acids long; T1050, 779 amino acids long) while reading the AlphaFold databases from our three central storage systems.

  • /cluster/scratch is a fast, short-term, personal storage system based on SSDs
  • /cluster/project is a long-term group storage system which uses HDDs for permanent storage with NVMe flash caches to accelerate reads
  • /cluster/work is a fast, long-term group storage system based on HDDs and suitable for large files

The tests ran on two of the NVIDIA GPU models available on Euler, the RTX 2080 Ti and the TITAN RTX (see the GPU specs here). All jobs allocated 12 CPU cores, one GPU, 120 GB of total memory, and 120 GB of total scratch space. The figures below show the benchmark results: the average runtime of five runs for the tests with the databases on /cluster/scratch and /cluster/project. The tests with the databases on /cluster/work were run only once, because the many small reads on this storage system significantly degrade not only the performance of these particular tests but also the overall performance of the whole /cluster/work storage system. The tested compute nodes were not reserved for testing, i.e., they might have been loaded with other computations while the AlphaFold tests were running.

Benchmark ubiquitin 1gpu.jpg

Fig 1: The performance results of AlphaFold2 in folding the Ubiquitin structure

Benchmark T1050 1gpu.jpg

Fig 2: The performance results of AlphaFold2 in folding the T1050 structure

Alphafold ubiquitin.png

Alphafold T1050.png

Fig 3: This figure shows a cartoon representation of two superimposed ubiquitin structures. Ubiquitin is a small monomeric protein with 76 amino acids. The structure in blue has been determined experimentally (X-ray crystallography, pdb database code: 1upq.pdb). The model in green shows the structure predicted by AlphaFold2. The RMSD (root mean square distance) between the two structures is 0.797 A. The RMSD has been calculated for the backbone atoms. (Image and caption text by Dr. Simon Rüdisser, BNSP)

Fig 4: The five models of T1050 generated by AlphaFold2 are shown as cartoon representation. T1050 is a monomeric protein with 779 amino acids. T1050 is one of the targets from the CASP (Critical Assessment of Techniques for Protein Structure Prediction) initiative. (Image and caption text by Dr. Simon Rüdisser, BNSP)

Based on folding these two proteins with AlphaFold, /cluster/project proves to be the best choice of group storage for the AlphaFold databases. AlphaFold performs comparably when reading the data from /cluster/scratch and /cluster/project, and around 10 times faster than when reading the data from /cluster/work. Since /cluster/scratch is short-term storage for personal use only, it is not an optimal solution for a group of users.

Further reading
