

Implementation of Data Harvesting in the CCP4 Suite

Martyn Winn

Daresbury Laboratory,
WA4 4AD, U.K.


The Data Harvesting paradigm pioneered by Kim Henrick at the European Bioinformatics Institute (EBI) has been under development for a couple of years, and will soon be in operation in users' labs. Background information can be found on the EBI-MSD web pages, and in two earlier Newsletter articles which give an overview, and a report on the September 1998 Joint CCP4/EBI Software Developers and Data Harvesting Workshop. Briefly, Data Harvesting means that each piece of software used in structure solution writes details of the method used and the results obtained, for example the heavy atom sites used in phasing, to a deposition file. By the time the user is ready to deposit the model coordinates, there should be a collection of files holding details of how the model was obtained. These files can be sent directly to the deposition centre, thereby bypassing much of the manual processing needed by AutoDep.

The EBI plan to be in a position to accept harvest files in Autumn 1999. Meanwhile, changes are being made to CCP4, MOSFLM and other common programs to produce harvest files. In this article, I will describe the relevant changes to CCP4.

Definition and application of datasets

Every deposition file should have associated in-house tags that identify the "Project Name" and "Dataset Name". The Project Name is the working equivalent of what will eventually become a PDB ID code (it has a corresponding data item in mmCIF), and the Dataset Name identifies the particular dataset within the project (either X-ray diffraction structure factors or NMR experimentally determined data) that is being used (again with a corresponding mmCIF data item). For each program that writes out a deposition file, it is possible to specify the Project and Dataset names using the program keywords PNAME and DNAME. In principle, however, the Project and Dataset names should be considered attributes of the dataset being used, and be specified once only for that dataset. The Project and Dataset names would then be inherited from the dataset by each program in turn.

This has been implemented in CCP4 by adding information on Project and Dataset names to the header of the MTZ file. In a merged MTZ file, datasets are held as one or more data columns. In addition to the label and type attributes, each column now has an extra attribute specifying to which dataset it belongs. A list of all datasets included in the file, with the corresponding Project and Dataset names, is held separately in the MTZ header.
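The bookkeeping just described can be pictured with a small sketch. This is purely illustrative: the data structures, function and column labels below are hypothetical, not the CCP4 MTZ library API; the dataset names are those of the TOXD example used later in this article.

```python
# Illustrative sketch of the dataset bookkeeping described above;
# names and data structures are hypothetical, not the CCP4 MTZ library.

# The MTZ header holds one entry per dataset: an ID plus the
# harvesting Project Name and Dataset Name.
datasets = {
    1: ("TOXD", "NATIVE"),
    2: ("TOXD", "DERIV_AU"),
    3: ("TOXD", "DERIV_MM"),
    4: ("TOXD", "DERIV_I"),
}

# Each data column now carries a dataset ID alongside its label and
# column type (the column labels here are invented for the example).
columns = [
    ("FP",        "F", 1),
    ("SIGFP",     "Q", 1),
    ("FPH_MM",    "F", 3),
    ("SIGFPH_MM", "Q", 3),
]

def names_for_column(label):
    """Return the (Project Name, Dataset Name) a column belongs to."""
    for col_label, col_type, dataset_id in columns:
        if col_label == label:
            return datasets[dataset_id]
    raise KeyError(label)
```

With this layout, any program that selects a column can immediately recover the Project and Dataset names without the user restating them.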

The code changes necessary to manipulate this information were included in CCP4 release 3.5. Ideally, dataset information should be added to the MTZ file at the beginning, e.g. in MOSFLM, but this information can be added at any time, most conveniently with the program CAD. Once the information is in the MTZ file, it can be checked by running mtzdmp which shows all the MTZ header information (go on, try it!), including the list of datasets:

 * Number of Datasets =   4
 * Dataset ID, project name, dataset name:
        1 TOXD NATIVE
        2 TOXD DERIV_AU
        3 TOXD DERIV_MM
        4 TOXD DERIV_I
and the datasets which each column corresponds to:

 * Column Labels :
 H K L ... SIGFI100 FreeR_flag
 * Column Types :
 H H H F Q D Q F Q F Q F Q I

 * Associated datasets :
    1   1   1   1   1   2   2   2   2   3   3   4   4   1

In CCP4, columns to be used are selected from the MTZ file by the LABIN keyword. If, for example, the LABIN assignments select the 10th and 11th columns, then as well as using those columns the program now also knows that they come from the 3rd dataset, with Project Name TOXD and Dataset Name DERIV_MM.

Unmerged or multi-record MTZ files are treated slightly differently. In this case, a particular column may correspond to several datasets, distinguished by different batch numbers. Datasets are therefore attached to batches rather than columns, and a pointer to the relevant dataset is held in the batch header.
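A sketch of that arrangement, again with invented batch numbers and data structures rather than the real MTZ library code, might look like this:

```python
# Illustrative sketch only: in an unmerged (multi-record) file the
# dataset pointer lives in each batch header, so the same physical
# column can belong to different datasets in different batches.
datasets = {1: ("TOXD", "NATIVE"), 3: ("TOXD", "DERIV_MM")}

# Batch number -> dataset ID, as recorded in the batch headers
# (batch numbers here are invented for the example).
batch_dataset = {101: 1, 102: 1, 201: 3}

def dataset_for_batch(batch_number):
    """Return the (Project Name, Dataset Name) for a given batch."""
    return datasets[batch_dataset[batch_number]]
```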

As an aside, classifying MTZ columns according to dataset has other uses. Previously, it was assumed that columns existed as independent entities, but this is clearly not the case, for example F(+) and F(-) columns, or F and sigmaF columns. Some programs now use dataset information to check for certain dependencies, for example the program REINDEX may need to swap F(+) and F(-) columns and therefore needs to identify which F(+) column goes with which F(-) column.
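The pairing step can be sketched as follows; the column labels and dataset IDs are invented for the illustration, and this is not the actual REINDEX code:

```python
# Illustrative sketch: use the dataset attribute to pair each F(+)
# column with the F(-) column from the same dataset, as a program
# such as REINDEX must do before swapping them.
columns = [
    ("F(+)_AU", 2), ("F(-)_AU", 2),
    ("F(+)_MM", 3), ("F(-)_MM", 3),
]

def anomalous_pairs(columns):
    """Return (F(+) label, F(-) label) pairs grouped by dataset ID."""
    plus = {ds: lab for lab, ds in columns if lab.startswith("F(+)")}
    minus = {ds: lab for lab, ds in columns if lab.startswith("F(-)")}
    return [(plus[ds], minus[ds]) for ds in sorted(plus) if ds in minus]
```

Grouping by dataset rather than by label avoids guessing the pairing from naming conventions alone.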

Harvesting from CCP4 programs

The current CCP4 release (3.5) thus handles datasets, but does not as yet write out deposition files. This is currently being implemented and will be included in the next release. The CCP4 programs affected are SCALA, TRUNCATE, MLPHARE, REFMAC and RESTRAIN. Provided a Project Name and a Dataset Name are specified (either explicitly or from the MTZ file) and provided the NOHARVEST keyword is not given, these programs will automatically produce a deposition file. This file will be written to

$HARVESTHOME/DepositFiles/<projectname>/<datasetname>.<programname>

The environment variable $HARVESTHOME defaults to the user's home directory, but could be changed, for example, to a group project directory.

At the end of a project, the entire contents of the directory $HARVESTHOME/DepositFiles/<projectname> can be sent to the deposition centre for processing. Note that, because of the file-naming scheme, only the last run of a particular program with a particular dataset will be preserved, and it is the user's responsibility to ensure that this is the authoritative version. The USECWD keyword can be used to send deposit files from speculative runs to the local directory rather than the official project directory. This keyword can also be used when the program is being run on a machine without access to the directory $HARVESTHOME, in which case the user must transfer the deposition file afterwards.
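The file-naming scheme above can be summarised in a short sketch (illustrative only, not the actual CCP4 implementation):

```python
import os

# Sketch of the deposit-file naming scheme described above
# (illustrative, not the actual CCP4 implementation).
def deposit_path(project, dataset, program, usecwd=False, environ=None):
    """Build the deposit-file path for one program run."""
    if usecwd:
        # USECWD: write to the current working directory instead.
        return f"{dataset}.{program}"
    environ = environ if environ is not None else os.environ
    # $HARVESTHOME defaults to the user's home directory.
    home = environ.get("HARVESTHOME") or environ.get("HOME", ".")
    return os.path.join(home, "DepositFiles", project,
                        f"{dataset}.{program}")
```

For example, deposit_path("TOXD", "NATIVE", "mlphare", environ={"HOME": "/home/mdw"}) gives /home/mdw/DepositFiles/TOXD/NATIVE.mlphare, the same form of name as the mlphare example later in this article.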

In summary, the extra keywords associated with harvesting that will be included in most programs are:

PNAME - set the Project Name. In most cases, this will be inherited from the MTZ file.
DNAME - set the Dataset Name. In most cases, this will be inherited from the MTZ file.
USECWD - write the deposit file to the current directory, rather than a subdirectory of $HARVESTHOME.
NOHARVEST - do not write out a deposit file; the default is to do so provided Project and Dataset names are available.

Further keywords set the deposit directory permissions to '700', i.e. read/write/execute for the user only (otherwise '755'), and the maximum width of a row in the deposit file (default 80).

There will inevitably have to be cooperation between members of a group working on the same project to ensure that all relevant deposition files are gathered together in the same directory, but such cooperation should occur anyway. At the time of deposition, there should be a resultant saving of time, as well as increased reliability in the information submitted.

Deposition files

Deposition files are written in mmCIF format. The possible contents of an mmCIF file are described in a continually evolving dictionary of allowed data items. Harvesting requires data items additional to those in the current standard dictionary, and an extended dictionary will be distributed by CCP4.

Example of deposition files

The distributed TOXD example dataset contains 4 datasets, all assigned to the Project Name "TOXD", and having the Dataset Names "NATIVE", "DERIV_AU", "DERIV_MM" and "DERIV_I" (see above). Running mlphare to phase the native dataset produces a file /home/mdw/DepositFiles/TOXD/NATIVE.mlphare where $HARVESTHOME has defaulted to my home directory. This file starts with information on when and how the file was created:

_audit.creation_date             1999-07-08T11:19:51+01:00
_software.classification         phasing
_software.contact_author         'Z.Otwinowski or E.Dodson'
_software.contact_author_email   ','
_software.description            'maximum likelihood heavy atom refinement & phase calculation'
_software.name                   mlphare
_software.version                CCP4_3.5

This is followed by details such as the cell dimensions and symmetry information, and then by a summary of the results, for example the figures of merit for the phases obtained, tabulated by resolution bin (numbers of reflections and mean figures of merit):

 9.56 15.00      61   0.484      41   0.553      20   0.343
 7.01  9.56      80   0.315      36   0.423      44   0.227
 5.54  7.01     120   0.351      45   0.502      75   0.261
 4.58  5.54     186   0.338      61   0.506     125   0.256
 3.90  4.58     255   0.327      68   0.484     187   0.270
 3.40  3.90     345   0.276      86   0.417     259   0.230
 3.01  3.40     430   0.271      90   0.446     340   0.225
 2.70  3.01     536   0.287     108   0.454     428   0.245
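Because the harvest files are plain mmCIF tag-value text, they are easy to read programmatically. As a minimal sketch (not a full mmCIF parser, which must also handle loop_ blocks and multi-line values):

```python
import shlex

# Minimal sketch: pull simple one-line "_tag value" items out of a
# harvest file.  A real mmCIF reader must also handle loop_ blocks
# and multi-line values; this only shows the flavour of the format.
def read_simple_items(text):
    items = {}
    for line in text.splitlines():
        line = line.strip()
        if not line.startswith("_"):
            continue
        parts = line.split(None, 1)
        if len(parts) == 2:
            tag, value = parts
            # shlex.split strips the mmCIF single quotes.
            items[tag] = shlex.split(value)[0]
    return items

sample = """\
_software.classification phasing
_software.contact_author 'Z.Otwinowski or E.Dodson'
_software.version CCP4_3.5
"""
items = read_simple_items(sample)
```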

The deposit files should be easily readable, but they should not be altered - they represent an authentic record of the structure solution process.
