<Copyright statement>= (U->)
__copyright = """
PYCIFRW License Agreement (Python License, Version 2)
-----------------------------------------------------

1. This LICENSE AGREEMENT is between the Australian Nuclear Science
and Technology Organisation ("ANSTO"), and the Individual or
Organization ("Licensee") accessing and otherwise using this software
("PyCIFRW") in source or binary form and its associated documentation.

2. Subject to the terms and conditions of this License Agreement,
ANSTO hereby grants Licensee a nonexclusive, royalty-free, world-wide
license to reproduce, analyze, test, perform and/or display publicly,
prepare derivative works, distribute, and otherwise use PyCIFRW alone
or in any derivative version, provided, however, that this License
Agreement and ANSTO's notice of copyright, i.e., "Copyright (c)
2001-2014 ANSTO; All Rights Reserved" are retained in PyCIFRW alone or
in any derivative version prepared by Licensee.

3. In the event Licensee prepares a derivative work that is based on
or incorporates PyCIFRW or any part thereof, and wants to make the
derivative work available to others as provided herein, then Licensee
hereby agrees to include in any such work a brief summary of the
changes made to PyCIFRW.

4. ANSTO is making PyCIFRW available to Licensee on an "AS IS"
basis. ANSTO MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, ANSTO MAKES NO AND
DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYCIFRW WILL NOT
INFRINGE ANY THIRD PARTY RIGHTS.

5. ANSTO SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYCIFRW
FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS A
RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYCIFRW, OR ANY
DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.

6. This License Agreement will automatically terminate upon a material
breach of its terms and conditions.

7. Nothing in this License Agreement shall be deemed to create any
relationship of agency, partnership, or joint venture between ANSTO
and Licensee. This License Agreement does not grant permission to use
ANSTO trademarks or trade name in a trademark sense to endorse or
promote products or services of Licensee, or any third party.

8. By copying, installing or otherwise using PyCIFRW, Licensee agrees
to be bound by the terms and conditions of this License Agreement.

"""

Introduction

This file implements a general CIF reading/writing utility. The basic objects (CifFile/CifBlock) read and write syntactically correct CIF 1.1 files including save frames. Objects for validating CIFs are built on these basic objects: A CifDic object is derived from a CifFile created from a DDL1/2 dictionary; and the ValidCifFile/ValidCifBlock objects allow creation/checking of CIF files against a list of CIF dictionaries.

The CifFile class is initialised with either no arguments (a new CIF file) or with the name of an already existing CIF file. Data items are accessed, changed or added using the Python mapping type, i.e. to get dataitem you would type value = cf[blockname][dataitem].

Note also that a CifFile object can be accessed as a mapping type, i.e. using square brackets. Most mapping operations have been implemented (see below).
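
To make the mapping behaviour concrete, here is a short usage sketch (the file name, block name and datanames are hypothetical, and the import path is assumed; note that blocks read from a file are locked against overwriting until block.overwrite is set, as described later):

    from CifFile import CifFile, CifBlock, ReadCif
    cf = ReadCif("demo.cif")                     # an existing file
    value = cf["demo_block"]["_cell_length_a"]   # read a data item
    cf["demo_block"].overwrite = True            # blocks are read in locked
    cf["demo_block"]["_cell_length_a"] = "5.43"  # change it
    new_cif = CifFile()                          # a brand-new CIF file
    new_cif["a_block"] = CifBlock()              # add an empty block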

We build upon the objects defined in the StarFile class, by imposing a few extra restrictions where necessary.

<*>=
<Copyright statement>

import re
import StarFile
import sys

<CifBlock class>
<CifFile class>
<Define an error class>
<CIF Dictionary type>
<A valid CIF block>
<A valid CIF file>
<Top-level functions>
<Utility functions>
<Read in a CIF file>
<CifLoopBlock class>
<API documentation flags>

CifFile

A CifFile is subclassed from a StarFile. Our StarFile class has an optional check of line length, which we use.

A CifFile object is a dictionary of CifBlock objects, accessed by block name. As the maximum line length is subject to change, we allow the length to be specified, with the current default set at 2048 characters (CIF 1.1). When reading files, we only flag a length error if the parameter strict is true, in which case parameter maxinlength is used as the maximum input line length. Parameter maxoutlength sets the maximum line length for output; if it is not specified, it defaults to the maximum input length.

Note that this applies to the input only. For changing output length, you can provide an optional parameter in the WriteOut method.
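
As a sketch, the parameters described above would be used as follows (the file name is hypothetical, and the WriteOut keyword name is assumed from the description above):

    cf = CifFile("demo.cif", strict=1, maxinlength=2048, maxoutlength=2048)
    as_text = cf.WriteOut(maxoutlength=80)   # narrower lines for this output only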

<CifFile class>= (<-U)
class CifFile(StarFile.StarFile):
<Initialise data structures>

When initialising, we add those parts that are unique to the CifFile as opposed to a simple collection of blocks - i.e. reading in from a file, and some line length restrictions. We do not indent this section in this noweb file, so that our comment characters output at the beginning of the line.

<Initialise data structures>= (<-U)
    def __init__(self,datasource=None,strict=1,standard='CIF',**kwargs):
        super(CifFile,self).__init__(datasource=datasource,standard=standard, **kwargs)
        self.strict = strict
        self.header_comment = \
"""
##########################################################################
#               Crystallographic Information Format file 
#               Produced by PyCifRW module
# 
#  This is a CIF file.  CIF has been adopted by the International
#  Union of Crystallography as the standard for data archiving and 
#  transmission.
#
#  For information on this file format, follow the CIF links at
#  http://www.iucr.org
##########################################################################
"""

Cif Block class

CifBlocks exist(ed) as a separate class in order to enforce non-nested loops and maximum dataname lengths. As nested loops have been removed completely from PyCIFRW, the separate class is strictly no longer necessary, but it is kept here for backwards compatibility.

<CifBlock class>= (<-U)
class CifBlock(StarFile.StarBlock):
    """
    A class to hold a single block of a CIF file.  A `CifBlock` object can be treated as
    a Python dictionary, in particular, individual items can be accessed using square
    brackets e.g. `b['_a_dataname']`.  All other Python dictionary methods are also
    available (e.g. `keys()`, `values()`).  Looped datanames will return a list of values.

    ## Initialisation

    When provided, `data` should be another `CifBlock` whose contents will be copied to
    this block.

    * if `strict` is set, maximum name lengths will be enforced

    * `maxoutlength` is the maximum length for output lines

    * `wraplength` is the ideal length to make output lines

    * When set, `overwrite` allows the values of datanames to be changed (otherwise an error
    is raised).

    * `compat_mode` will allow deprecated behaviour of creating single-dataname loops using
    the syntax `a[_dataname] = [1,2,3,4]`.  This should now be done by calling `CreateLoop`
    after setting the dataitem value.
    """
    <Initialise Cif Block>
    <Adjust emulation of a mapping type>
    <Add a data item>
    <Return all looped names>
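
The docstring above refers to two looping styles; a minimal sketch of both (datanames hypothetical):

    b = CifBlock()
    b["_atom_site_label"] = ["C1","C2","C3"]
    b.CreateLoop(["_atom_site_label"])            # explicit loop creation

    old = CifBlock(compat_mode=True)
    old["_atom_site_label"] = ["C1","C2","C3"]    # loop created implicitly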

A CifBlock is a StarBlock with a very few restrictions.

<Initialise Cif Block>= (<-U)
def __init__(self,data = (), strict = 1, compat_mode=False, **kwargs):
    """When provided, `data` should be another CifBlock whose contents will be copied to
    this block.

    * if `strict` is set, maximum name lengths will be enforced

    * `maxoutlength` is the maximum length for output lines

    * `wraplength` is the ideal length to make output lines

    * When set, `overwrite` allows the values of datanames to be changed (otherwise an error
    is raised).

    * `compat_mode` will allow deprecated behaviour of creating single-dataname loops using
    the syntax `a[_dataname] = [1,2,3,4]`.  This should now be done by calling `CreateLoop`
    after setting the dataitem value.
    """
    if strict: maxnamelength=75
    else:
       maxnamelength=-1
    super(CifBlock,self).__init__(data=data,maxnamelength=maxnamelength,**kwargs)
    self.dictionary = None   #DDL dictionary referring to this block
    self.compat_mode = compat_mode   #old-style behaviour of setitem

def RemoveCifItem(self,itemname): 
    """Remove `itemname` from the CifBlock"""
    self.RemoveItem(itemname)

The second line in the copy method switches the class of the returned object to be a CifBlock. It may not be necessary.

<Adjust emulation of a mapping type>= (<-U)
def __setitem__(self,key,value):
    self.AddItem(key,value)
    # for backwards compatibility make a single-element loop
    if self.compat_mode:
        if isinstance(value,(tuple,list)) and not isinstance(value,StarFile.StarList):
             # single element loop
             self.CreateLoop([key])

def copy(self):
    newblock = super(CifBlock,self).copy()
    return self.copy.im_class(newblock)   #catch inheritance

This function was added for the dictionary validation routines. It will return a list where each member is itself a list of item names, corresponding to the names in each loop of the file.

<Return all looped names>= (<-U)
def loopnames(self):
    return [self.loops[a] for a in self.loops]
                          

Adding a data item. In the old, deprecated method we are passed a two-element tuple: the datanames (a string, or a nested tuple of names for a loop) at the beginning, and the corresponding values following.

We implement this behaviour by looping over the input datanames, and adding them to the set of keys. When we have finished, we create the loop.

We check the length of the name, and give an error if the name is greater than 75 characters, which is the CIF 1.1 maximum length.

We also check for consistency, by making sure the new item is not in the block already. If it is, we replace it (consistent with the meaning of square brackets). If it is in a loop, we replace the looped value and all other items in that loop block. This means that when adding loops, we must add them all at once if we call this routine directly.

We typecheck the data items. They can be tuples, strings or lists. If we have a list of values for a single item, the item name should also occur in a single member tuple.

<Add a data item>= (<-U)
def AddCifItem(self,data):
    """ *DEPRECATED*. Use `AddItem` instead."""
    # we accept only tuples, strings and lists!!
    if not (isinstance(data[0],(basestring,tuple,list))):
              raise TypeError, 'Cif datanames are either a string, tuple or list'
    # we catch single item loops as well...
    if isinstance(data[0],basestring):
        self.AddSingleCifItem(data[0],list(data[1]))
        if isinstance(data[1],(tuple,list)) and not isinstance(data[1],StarFile.StarList):  # a single element loop
            self.CreateLoop([data[0]])
        return
    # otherwise, we loop over the datanames
    keyvals = zip(data[0][0],[list(a) for a in data[1][0]])
    [self.AddSingleCifItem(a,b) for a,b in keyvals]
    # and create the loop
    self.CreateLoop(data[0][0])

def AddSingleCifItem(self,key,value):
    """*Deprecated*. Use `AddItem` instead"""
    """Add a single data item. If it is part of a loop, a separate call should be made"""
    self.AddItem(key,value)
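
For illustration, a deprecated-style call and its modern equivalent (datanames hypothetical):

    b = CifBlock()
    # old style: a tuple of dataname tuples followed by a tuple of value tuples
    b.AddCifItem(((("_atom_site_label","_atom_site_occupancy"),),
                  ((["O1","C2"],["1.0","0.5"]),)))
    # modern equivalent
    b["_atom_site_label"] = ["O1","C2"]
    b["_atom_site_occupancy"] = ["1.0","0.5"]
    b.CreateLoop(["_atom_site_label","_atom_site_occupancy"])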

Reading in a file. We use the STAR grammar parser. Note that the blocks returned will be locked for changing (overwrite=False) and can be unlocked by setting block.overwrite to True.

<Read in a CIF file>= (<-U)
def ReadCif(filename,grammar='auto',scantype='standard',scoping='instance',standard='CIF'):
    """ Read in a CIF file, returning a `CifFile` object.  

    * `filename` may be a URL, a file
    path on the local system, or any object with a `read` method.

    * `grammar` chooses the CIF grammar variant. `1.0` is the original 1992 grammar and `1.1`
    is identical except for the exclusion of square brackets as the first characters in
    undelimited datanames. `2.0` will read files in the CIF2.0 standard, and `STAR2` will
    read files according to the STAR2 publication.  If grammar is `auto` (the default),
    autodetection will be attempted in the order `2.0`, `1.1` and `1.0`. This will always
    succeed for properly-formed CIF2.0 files.

    * `scantype` can be `standard` or `flex`.  `standard` provides pure Python parsing at the
    cost of a factor of 10 or so in speed.  `flex` will tokenise the input CIF file using
    fast C routines, but is not available for CIF2/STAR2 files.  Note that running PyCIFRW in 
    Jython uses native Java regular expressions
    to provide a speedup regardless of this argument (and does not yet support CIF2).

    * `scoping` is only relevant where nested save frames are expected (STAR2 only).
    `instance` scoping makes nested save frames invisible outside their hierarchy,
    allowing duplicate save frame names in separate hierarchies. `dictionary` scoping
    makes all save frames within a data block visible to each other, thereby requiring
    all save frames to have unique names.

    * `standard`: currently the only recognised value is `CIF`, which when set enforces a
    maximum length of 75 characters for datanames and has no other effect. """

    finalcif = CifFile(scoping=scoping,standard=standard)
    return StarFile.ReadStar(filename,prepared=finalcif,grammar=grammar,scantype=scantype)
    #return StarFile.StarFile(filename,maxlength,scantype=scantype,grammar=grammar,**kwargs)
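
A sketch of typical calls (file locations hypothetical):

    cf_old = ReadCif("legacy.cif", grammar="1.0")     # force the original grammar
    cf_new = ReadCif("modern.cif", grammar="2.0")     # a CIF2.0 file
    cf_any = ReadCif("unknown.cif")                   # 'auto': tries 2.0, then 1.1, then 1.0
    cf_fast = ReadCif("big.cif", grammar="1.1", scantype="flex")  # C tokeniser, CIF1 only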

Defining error classes: we simply derive two minimal classes from the built-in Exception class.

<Define an error class>= (<-U)
class CifError(Exception):
    def __init__(self,value):
        self.value = value
    def __str__(self):
        return '\nCif Format error: '+ self.value 

class ValidCifError(Exception):
    def __init__(self,value):
        self.value = value
    def __str__(self):
        return '\nCif Validity error: ' + self.value

Dictionaries

To avoid ambiguity with the Python dictionary type, we use capital D to denote CIF dictionaries where misinterpretation is possible.

We build our Dictionary behaviour on top of the StarFile object, which is notionally a collection of StarBlocks. A Dictionary is simply a collection of datablocks, where each datablock corresponds to a single definition. DDLm introduced nesting of definitions. DDL1 had no category definitions.

We adopt a data model whereby the excess information in a DDL2 dictionary is absorbed into special methods (and I am thinking here of the _item_type_list.construct stuff which appears at the global level), which we initialise ourselves for a DDL1 dictionary.

Note that the underscore following the data or save frame header is technically not part of the name, and so in DDL2 dictionaries there are two underscores after the word "save". DDL1 people had not figured this out yet, so we have to add it in by changing each block name. DDLm reverts to the DDL1 behaviour, perhaps as a way of emphasizing that the block name and the contents are semantically different. Or alternatively, that the leading underscore is not part of the name.

<CIF Dictionary type>= (<-U)
class CifDic(StarFile.BlockCollection):
    <Initialise Cif dictionary>
    <Dictionary determination function>
    <Deal with DDL1 differences>
    <Iron out DDL2 strangeness> 
    <Normalise DDLm save frame names>
    <Parse DDLm validity information>
    <Perform DDLm imports>
    <Create alias table>
    <Create category/object table>
    <Load categories with DDL2-type information>
    <Add type information>
    <Add category information>
    <List all items in a category>
    <Category manipulation methods>
    <Return a single packet by key>
    <Extract number and esd>
    <Analyse range>
    <Initialise dREL functions>
    <Transform drel to python>
    <Store dREL functions> 
    <Switch on numpy arrays>
    <Derive item information>
    <Convert string to appropriate type>
    <Item-level validation>
    <Loop-level validation>
    <Cross-item validation>
    <Block-level validation>
    <Run validation tests>
    <Optimisation on/off>

We want to be able to accept strings, giving the file name of the CIF dictionary, and pre-initialised CifFile objects. We do not accept CifDic objects. Our initialisation procedure first unifies the interface to the Dictionary, and then runs through the Dictionary producing a normalised form. Following this, type and category information can be collected for later reference.

Validation functions are listed so that it would be possible to add and remove them from the "valid set". This behaviour has not yet been implemented.

When loading DDLm dictionaries we may recursively call this initialisation function with a dictionary to be imported as the argument. In this case we don't want to do all the method derivation, as the necessary categories will be loaded into the calling dictionary rather than the currently initialising dictionary. So the do_minimum keyword argument stops the operations that act on the dictionary as a whole from taking place.

The dREL methods require Numpy support, but we do not wish to introduce a global dependence on Numpy. Therefore, we introduce a 'switch' which will return Numpy arrays from the __getitem__ method instead of StarLists. It is intended that the dREL methods will turn this on only during execution, then turn it off afterwards.
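
The intended calling pattern is a simple bracketing of execution, sketched below (the dictionary file name is hypothetical; switch_numpy is defined in a later chunk):

    dic = CifDic("cif_core_ddlm.dic")     # a DDLm dictionary
    dic.switch_numpy(True)                # __getitem__ now returns Numpy arrays
    # ... execute dREL methods here ...
    dic.switch_numpy(False)               # back to returning StarLists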

<Initialise Cif dictionary>= (<-U)
def __init__(self,dic,do_minimum=False,grammar='1.1',**kwargs):
    self.do_minimum = do_minimum
    self.dic_as_cif = dic
    self.template_cache = {}    #for DDLm imports
    self.ddlm_functions = {}    #for DDLm functions
    self.switch_numpy(False)    #no Numpy arrays returned 
    if isinstance(dic,basestring):
        self.dic_as_cif = CifFile(dic,grammar=grammar,**kwargs)
    (self.dicname,self.diclang,self.defdata) = self.dic_determine(self.dic_as_cif)
    print '%s is a %s dictionary' % (self.dicname,self.diclang)
    super(CifDic,self).__init__(datasource=self.defdata,**kwargs) 
    self.scopes_mandatory = {"dictionary":[],"category":[],"item":[]}
    self.scopes_naughty = {"dictionary":[],"category":[],"item":[]}
    self.grammar = grammar # remember for importing dictionaries correctly
    # rename and expand out definitions using "_name" in DDL dictionaries
    if self.diclang == "DDL1":
        self.DDL1_normalise()   #this removes any non-definition entries
        self.ddl1_cat_load()
    elif self.diclang == "DDL2":
        self.DDL2_normalise()   #iron out some DDL2 tricky bits
    elif self.diclang == "DDLm":
        self.scoping = 'dictionary'   #expose all save frames
        self.ddlm_normalise()
        self.ddlm_import()      #recursively calls this routine
        self.create_alias_table()
        self.create_cat_obj_table()
        self.create_cat_key_table()
        if not self.do_minimum:
            print "Doing full dictionary initialisation" 
            self.initialise_drel()
    self.add_category_info()
    # initialise type information
    self.typedic={}
    self.primdic = {}   #typecode<->primitive type translation
    self.add_type_info()
    if self.diclang != 'DDLm':
      self.item_validation_funs = [
        self.validate_item_type,
        self.validate_item_esd,
        self.validate_item_enum,   # functions which check conformance
        self.validate_enum_range,
        self.validate_looping]
      self.loop_validation_funs = [
        self.validate_loop_membership,
        self.validate_loop_key,
        self.validate_loop_references]    # functions checking loop values
      self.global_validation_funs = [
        self.validate_exclusion,
        self.validate_parent,
        self.validate_child,
        self.validate_dependents,
        self.validate_uniqueness] # where we need to look at other values
      self.block_validation_funs = [  # where only a full block will do
        self.validate_mandatory_category]
      self.global_remove_validation_funs = [
        self.validate_remove_parent_child] # removal is quicker with special checks
    elif self.diclang == 'DDLm':
        self.item_validation_funs = []
        self.loop_validation_funs = []
        self.global_validation_funs = []
        self.block_validation_funs = []
        self.global_remove_validation_funs = []
    self.optimize = False        # default value
    self.done_parents = []
    self.done_children = []
    self.done_keys = []
    # debug
    # j = open("dic_debug","w")
    # j.write(self.__str__())
    # j.close()

Full initialisation. This can take some time so we optionally skip it, but can call this function separately at a later stage if needed.

<Initialise dREL functions>= (<-U)
def initialise_drel(self):
    """Parse drel functions and prepare data structures in dictionary"""
    self.ddlm_parse_valid() #extract validity information from data block
    self.transform_drel()   #parse the drel functions
    self.add_drel_funcs()   #put the drel functions into the namespace

This function determines whether we have a DDLm, DDL2 or DDL1 dictionary. We are passed a CifFile object. The current method looks for an on_this_dictionary block, which implies DDL1, or a single block, which implies DDL2/DDLm. This is also where we define some universal keys for uniform access to DDL attributes.

<Dictionary determination function>= (<-U)
def dic_determine(self,cifdic):
    if cifdic.has_key("on_this_dictionary"): 
        self.master_key = "on_this_dictionary"
        self.type_spec = "_type"
        self.enum_spec = "_enumeration"
        self.cat_spec = "_category"
        self.esd_spec = "_type_conditions"
        self.must_loop_spec = "_list"
        self.must_exist_spec = "_list_mandatory"
        self.list_ref_spec = "_list_reference"
        self.unique_spec = "_list_uniqueness"
        self.child_spec = "_list_link_child"
        self.parent_spec = "_list_link_parent"
        self.related_func = "_related_function"
        self.related_item = "_related_item"
        self.primitive_type = "_type"
        self.dep_spec = "xxx"
        self.cat_list = []   #to save searching all the time
        name = cifdic["on_this_dictionary"]["_dictionary_name"]
        version = cifdic["on_this_dictionary"]["_dictionary_version"]
        return (name+version,"DDL1",cifdic)
    elif len(cifdic.get_roots()) == 1:              # DDL2/DDLm
        self.master_key = cifdic.get_roots()[0][0]      
        # now change to dictionary scoping
        cifdic.scoping = 'dictionary'
        name = cifdic[self.master_key]["_dictionary.title"]
        version = cifdic[self.master_key]["_dictionary.version"]
        if name != self.master_key:
            print "Warning: DDL2 blockname %s not equal to dictionary name %s" % (self.master_key,name)
        if cifdic[self.master_key].has_key("_dictionary.class"):   #DDLm
            self.unique_spec = "_category_key.generic"
            return(name+version,"DDLm",cifdic) 
        #otherwise DDL2
        self.type_spec = "_item_type.code" 
        self.enum_spec = "_item_enumeration.value"
        self.esd_spec = "_item_type_conditions.code"
        self.cat_spec = "_item.category_id" 
        self.loop_spec = "there_is_no_loop_spec!"
        self.must_loop_spec = "xxx"
        self.must_exist_spec = "_item.mandatory_code"
        self.child_spec = "_item_linked.child_name"
        self.parent_spec = "_item_linked.parent_name"
        self.related_func = "_item_related.function_code"
        self.related_item = "_item_related.related_name"
        self.unique_spec = "_category_key.name"
        self.list_ref_spec = "xxx"
        self.primitive_type = "_type"
        self.dep_spec = "_item_dependent.dependent_name"
        return (name+version,"DDL2",cifdic)
    else:
        raise CifError, "Unable to determine dictionary DDL version"
    

DDL1 differences. Firstly, in DDL1 you can loop a _name to get definitions of related names (e.g. x,y,z). Secondly, the data block name is missing the initial underscore, so we need to read the _name value. There is one block without a _name attribute, which we proceed to destroy (exercise for the reader: which one?).

A further complex difference is in the way that ranges are specified. A DDL2 dictionary generally loops the _item_range.maximum/minimum items, in order to specify inclusion of the endpoints of the range, whereas DDL1 dictionaries simply specify ranges as n:m. We translate these values into item_range specifications.
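
For example, a DDL1 range with no upper limit translates roughly as follows (a sketch based on the normalisation code below):

    _enumeration_range    0.0:

becomes the equivalent of

    loop_
        _item_range.maximum
        _item_range.minimum
          .     0.0
          0.0   0.0

where the first row expresses the open upper bound and the second row includes the endpoint 0.0 itself.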

If the _list item is missing for a dictionary definition, it defaults to no, i.e. the item cannot be listed. We explicitly include this in our transformations.

The dictionaries also contain categories, which are used to impose constraints on groupings of items in lists. Category names in DDL2 dictionaries have no leading underscore, and the constraints are stored directly in the category definition. So, with a DDL1 dictionary, we rewrite things to match the DDL2 methods. In particular, the list_uniqueness item becomes the category_key.name attribute of the category. This may apply to _list_mandatory and/or _list_reference too, but the current specification is vague.

Also, it is possible for cross-item references (e.g. in a _list_reference) to include a whole range of items by terminating the name with an underscore. It is then understood to include anything starting with those characters. We explicitly try to expand these references out.

Note the way we convert to DDL2-style type definitions; any definition having a _type_construct regular expression triggers the definition of a whole new type, which is stored as per DDL2, for the later type dictionary construction process to find.

<Deal with DDL1 differences>= (<-U)
def DDL1_normalise(self):
    # switch off block name collision checks
    self.standard = None
    # add default type information in DDL2 style
    # initial types and constructs
    base_types = ["char","numb","null"]
    prim_types = base_types[:] 
    base_constructs = [".*",
        '(-?(([0-9]*[.][0-9]+)|([0-9]+)[.]?)([(][0-9]+[)])?([eEdD][+-]?[0-9]+)?)|\?|\.',
        "\"\" "]
    for key,value in self.items():
       newnames = [key]  #keep by default
       if value.has_key("_name"):
           real_name = value["_name"]
           if isinstance(real_name,list):        #looped values
               for looped_name in real_name:
                  new_value = value.copy()
                  new_value["_name"] = looped_name  #only looped name
                  self[looped_name] = new_value
               newnames = real_name
           else: 
                  self[real_name] = value
                  newnames = [real_name]
       # delete the old one
       if key not in newnames:
          del self[key]
    # loop again to normalise the contents of each definition
    for key,value in self.items():
       #unlock the block
       save_overwrite = value.overwrite
       value.overwrite = True
       # deal with a missing _list, _type_conditions
       if not value.has_key("_list"): value["_list"] = 'no'
       if not value.has_key("_type_conditions"): value["_type_conditions"] = 'none'
       # deal with enumeration ranges
       if value.has_key("_enumeration_range"):
           max,min = self.getmaxmin(value["_enumeration_range"])
           if min == ".":
               self[key].AddLoopItem((("_item_range.maximum","_item_range.minimum"),((max,max),(max,min))))
           elif max == ".":
               self[key].AddLoopItem((("_item_range.maximum","_item_range.minimum"),((max,min),(min,min))))
           else:
               self[key].AddLoopItem((("_item_range.maximum","_item_range.minimum"),((max,max,min),(max,min,min))))
       #add any type construct information
       if value.has_key("_type_construct"):
           base_types.append(value["_name"]+"_type")   #ie dataname_type
           base_constructs.append(value["_type_construct"]+"$")
           prim_types.append(value["_type"])     #keep a record
           value["_type"] = base_types[-1]   #the new type name
           
       #make categories conform with ddl2
       #note that we must remove everything from the last underscore
       if value.get("_category",None) == "category_overview":
            last_under = value["_name"].rindex("_")
            catid = value["_name"][1:last_under]
            value["_category.id"] = catid  #remove square brackets
            if catid not in self.cat_list: self.cat_list.append(catid)
       value.overwrite = save_overwrite 
    # we now add any missing categories before filling in the rest of the
    # information
    for key,value in self.items():
        #print 'processing ddl1 definition %s' % key
        if self[key].has_key("_category"):
            if self[key]["_category"] not in self.cat_list:
                # rogue category, add it in
                newcat = self[key]["_category"]
                fake_name = "_" + newcat + "_[]" 
                newcatdata = CifBlock()
                newcatdata["_category"] = "category_overview"
                newcatdata["_category.id"] = newcat
                newcatdata["_type"] = "null"
                self[fake_name] = newcatdata
                self.cat_list.append(newcat)
    # write out the type information in DDL2 style
    self.dic_as_cif[self.master_key].AddLoopItem((
        ("_item_type_list.code","_item_type_list.construct",
          "_item_type_list.primitive_code"),
        (base_types,base_constructs,prim_types)
        ))
 

DDL2 has a few idiosyncrasies of its own. For some reason, in the definition of a parent item, all the child items are listed and their mandatory/not mandatory status specified. This duplicates information under the child item itself, although there is something on the web indicating that this is purely cosmetic and not strictly necessary. For our purposes, we want to extract the mandatory/not mandatory nature of the current item, which appears to be conventionally at the top of the list (we don't assume this below). The only way to determine the actual item name is to look at the save frame name, which is a bit of a fragile tactic - especially as dictionary merge operations are supposed to look for _item.name.
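
Schematically, such a parent definition looks something like the following (datanames hypothetical):

    save__atom_site.id
        loop_
            _item.name
            _item.category_id
            _item.mandatory_code
              '_atom_site.id'          atom_site         yes
              '_atom_site_aniso.id'    atom_site_aniso   implicit
    save_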

So, in these cases, we have to assume the save frame name is the one we want, and find this entry in the list.

Additionally, the child entry doesn't contain the category specification, so we add this into the child entry at the same time, together with a pointer to the parent item.

Such entries then have a loop listing parents and children down the whole hierarchy, starting with the current item. We disentangle this, placing parent item attributes in the child items, moving sub-children down to their level. Sub children may not exist at all, so we create them if necessary.

To make life more interesting, the PDBx dictionaries have an entry_pc placeholder in which additional (and sometimes repeated) parent-child relationships can be expressed. We cannot assume that any given parent-child relationship is stated at a single site in the file. What is more, it appears that multiple parents for a single child are defined in the _entry.pdbx_pc entry. Our changes to the file pre-checking are therefore restricted to making sure that the child contains information about the parents; we do not interfere with the parent's information about the children, even if we consider that to be superfluous. Note that we will have to add parent-child validity checks to check consistency among all these relationships.

Update: in the DDL-2.1.6 file, only the parents/children are looped, rather than the item names, so we have to check looping separately.

Next: DDL2 contains aliases to DDL1 item names, so in theory we should be able to use a DDL2 dictionary to validate a DDL1-style CIF file. We create separate definition blocks for each alias to enable this.

Also, we flatten out any single-element lists for item_name. This is simply to avoid the value of e.g. category_id being a single-element list instead of a string.

Note also that _item.category_id in DDL2 is 'implicit', meaning in this case that you can determine it from the item name. We add in the category for simplicity.

<Iron out DDL2 strangeness>= (<-U)
<Loopify parent-child relationships>

def DDL2_normalise(self):
   listed_defs = filter(lambda a:isinstance(self[a].get('_item.name'),list),self.keys()) 
   # now filter out all the single element lists!
   dodgy_defs = filter(lambda a:len(self[a]['_item.name']) > 1, listed_defs)
   for item_def in dodgy_defs:
      <Repopulate child definitions>
   <Populate parent and child links correctly>
   # now flatten any single element lists
   single_defs = filter(lambda a:len(self[a]['_item.name'])==1,listed_defs)
   for flat_def in single_defs:
       flat_keys = self[flat_def].GetLoop('_item.name').keys()
       for flat_key in flat_keys: self[flat_def][flat_key] = self[flat_def][flat_key][0]
   # now deal with the multiple lists
   # next we do aliases
   all_aliases = filter(lambda a:self[a].has_key('_item_aliases.alias_name'),self.keys()) 
   for aliased in all_aliases:
      my_aliases = listify(self[aliased]['_item_aliases.alias_name'])
      for alias in my_aliases:
          self[alias] = self[aliased].copy()   #we are going to delete stuff...
          del self[alias]["_item_aliases.alias_name"]

As some DDL2 dictionaries neglect children, we repopulate the skeleton child definitions found in the dictionary, creating them where they are missing entirely.

<Repopulate child definitions>= (<-U)
      # print "DDL2 norm: processing %s" % item_def
      thisdef = self[item_def]
      packet_no = thisdef['_item.name'].index(item_def)
      realcat = thisdef['_item.category_id'][packet_no] 
      realmand = thisdef['_item.mandatory_code'][packet_no]
      # first add in all the missing categories
      # we don't replace the entry in the list corresponding to the
      # current item, as that would wipe out the information we want
      for child_no in range(len(thisdef['_item.name'])):
          if child_no == packet_no: continue
          child_name = thisdef['_item.name'][child_no]
          child_cat = thisdef['_item.category_id'][child_no]
          child_mand = thisdef['_item.mandatory_code'][child_no]
          if not self.has_key(child_name):
              self[child_name] = CifBlock()
              self[child_name]['_item.name'] = child_name
          self[child_name]['_item.category_id'] = child_cat
          self[child_name]['_item.mandatory_code'] = child_mand
      self[item_def]['_item.name'] = item_def
      self[item_def]['_item.category_id'] = realcat
      self[item_def]['_item.mandatory_code'] = realmand

Populating parent and child links. The DDL2 model uses parent-child relationships to create relational database behaviour. This means that the emphasis is on simply linking two ids together directionally. This link is not necessarily inside a definition that is being linked, but we require that any parents and children are identified within the definition that they relate to. This means we have to sometimes relocate and expand links. As an item can simultaneously be both a parent and a child, we need to explicitly fill in the links even within a single definition.

<Populate parent and child links correctly>= (<-U)
target_defs = filter(lambda a:self[a].has_key('_item_linked.child_name') or \
                              self[a].has_key('_item_linked.parent_name'),self.keys())
# now dodgy_defs contains all definition blocks with more than one child/parent link
for item_def in dodgy_defs: self.create_pcloop(item_def)           #regularise appearance
for item_def in dodgy_defs:
      print 'Processing %s' % item_def
      thisdef = self[item_def]
      child_list = thisdef['_item_linked.child_name']
      parents = thisdef['_item_linked.parent_name']
      # for each parent, find the list of children.
      family = zip(parents,child_list)
      notmychildren = family         #We aim to remove non-children
      # Loop over the parents, relocating as necessary
      while len(notmychildren):
         # get all children of first entry
         mychildren = filter(lambda a:a[0]==notmychildren[0][0],family)
         print "Parent %s: %d children" % (notmychildren[0][0],len(mychildren))
         for parent,child in mychildren:   #parent is the same for all
                  # Make sure that we simply add in the new entry for the child, not replace it,
                  # otherwise we might spoil the child entry loop structure
                  try:
                      childloop = self[child].GetLoop('_item_linked.parent_name')
                  except KeyError:
                      print 'Creating new parent entry %s for definition %s' % (parent,child)
                      self[child]['_item_linked.parent_name'] = [parent]
                      childloop = self[child].GetLoop('_item_linked.parent_name')
                      childloop.AddLoopItem(('_item_linked.child_name',[child]))
                      continue
                  else:
                      # A parent loop already exists and so will a child loop due to the
                      # call to create_pcloop above
                      pars = [a for a in childloop if getattr(a,'_item_linked.child_name','')==child]
                      goodpars = [a for a in pars if getattr(a,'_item_linked.parent_name','')==parent]
                      if len(goodpars)>0:   #no need to add it
                          print 'Skipping duplicated parent - child entry in %s: %s - %s' % (child,parent,child)
                          continue
                      print 'Adding %s to %s entry' % (parent,child)
                      newpacket = childloop.GetPacket(0)   #essentially a copy, I hope
                      setattr(newpacket,'_item_linked.child_name',child)
                      setattr(newpacket,'_item_linked.parent_name',parent)
                      childloop.AddPacket(newpacket)
         #
         # Make sure the parent also points to the children.  We get
         # the current entry, then add our 
         # new values if they are not there already
         # 
         parent_name = mychildren[0][0]
         old_children = self[parent_name].get('_item_linked.child_name',[])
         old_parents = self[parent_name].get('_item_linked.parent_name',[])
         oldfamily = zip(old_parents,old_children)
         newfamily = []
         print 'Old parents -> %s' % `old_parents`
         for jj, childname in mychildren:
             alreadythere = filter(lambda a:a[0]==parent_name and a[1] ==childname,oldfamily)
             if len(alreadythere)>0: continue
             print 'Adding new child %s to parent definition at %s' % (childname,parent_name)
             old_children.append(childname)
             old_parents.append(parent_name)
         # Now output the loop, blowing away previous definitions.  If there is something
         # else in this category, we are destroying it.
         newloop = CifLoopBlock(dimension=1)
         newloop.AddLoopItem(('_item_linked.parent_name',old_parents))
         newloop.AddLoopItem(('_item_linked.child_name',old_children))
         del self[parent_name]['_item_linked.parent_name']
         del self[parent_name]['_item_linked.child_name']
         self[parent_name].insert_loop(newloop)
         print 'New parents -> %s' % `self[parent_name]['_item_linked.parent_name']`
         # now make a new,smaller list
         notmychildren = filter(lambda a:a[0]!=mychildren[0][0],notmychildren)

In order to handle parent-child relationships in a regular way, we want to assume that all parent-child entries occur in a loop, with both members present. This routine does that for us. If the parent is missing, it is assumed to be the currently-defined item. If the child is missing, likewise.

<Loopify parent-child relationships>= (<-U)
def create_pcloop(self,definition):
    old_children = self[definition].get('_item_linked.child_name',[])
    old_parents = self[definition].get('_item_linked.parent_name',[])
    if isinstance(old_children,basestring): 
         old_children = [old_children]
    if isinstance(old_parents,basestring): 
         old_parents = [old_parents]
    if (len(old_children)==0 and len(old_parents)==0) or \
       (len(old_children) > 1 and len(old_parents)>1):
         return
    if len(old_children)==0:
         old_children = [definition]*len(old_parents)
    if len(old_parents)==0:
         old_parents = [definition]*len(old_children)
    newloop = CifLoopBlock(dimension=1)
    newloop.AddLoopItem(('_item_linked.parent_name',old_parents)) 
    newloop.AddLoopItem(('_item_linked.child_name',old_children)) 
    try:
        del self[definition]['_item_linked.parent_name']
        del self[definition]['_item_linked.child_name']
    except KeyError:
        pass
    self[definition].insert_loop(newloop)
        
    

Loading the DDL1 categories with DDL2-type information. DDL2 people wisely put category-wide information in the category definition rather than spreading it out between category items. We collect this information together here.

This routine is the big time-waster in initialising a DDL1 dictionary, so we have attempted to optimize it by locally defining functions, instead of using lambdas, and making one loop through the dictionary instead of hundreds.

<Load categories with DDL2-type information>= (<-U)
def ddl1_cat_load(self):
    deflist = self.keys()       #slight optimization
    cat_mand_dic = {}
    cat_unique_dic = {}
    # a function to extract any necessary information from each definition
    def get_cat_info(single_def):
        if self[single_def].get(self.must_exist_spec)=='yes':
            thiscat = self[single_def]["_category"]
            curval = cat_mand_dic.get(thiscat,[])
            curval.append(single_def)
            cat_mand_dic[thiscat] = curval
        # now the unique items...
        # cif_core.dic throws us a curly one: the value of list_uniqueness is
        # not the same as the defined item for publ_body_label, so we have
        # to collect both together.  We assume a non-listed entry, which
        # is true for all current (May 2005) ddl1 dictionaries.
        if self[single_def].get(self.unique_spec,None)!=None:
            thiscat = self[single_def]["_category"]
            new_unique = self[single_def][self.unique_spec]
            uis = cat_unique_dic.get(thiscat,[])
            if single_def not in uis: uis.append(single_def)
            if new_unique not in uis: uis.append(new_unique)
            cat_unique_dic[thiscat] = uis
        
    map(get_cat_info,deflist)       # apply the above function
    for cat in cat_mand_dic.keys():
        cat_entry = self.get_ddl1_entry(cat)
        self[cat_entry]["_category_mandatory.name"] = cat_mand_dic[cat]
    for cat in cat_unique_dic.keys():
        cat_entry = self.get_ddl1_entry(cat)
        self[cat_entry]["_category_key.name"] = cat_unique_dic[cat]

# A helper function to find the entry corresponding to a given category name:
# yes, in DDL1 the actual name is different in the category block due to the
# addition of square brackets which may or may not contain stuff.

def get_ddl1_entry(self,cat_name):
    chop_len = len(cat_name) 
    possibles = filter(lambda a:a[1:chop_len+3]==cat_name+"_[",self.keys())
    if len(possibles) > 1 or possibles == []:
        raise ValidCifError, "Category name %s can't be matched to category entry" % cat_name
    else:
        return possibles[0]

Normalise DDLm datanames. While there is no strict requirement that the save frame name corresponds to the name of the data item defined therein, we make it equivalent so that all of our access methods work nicely. Pending a detailed description of how variable names used in dREL methods are derived, we go the harder route and concatenate the category name with the object id (although it may be possible to use the _definition.id). Note that we do not alter category definitions, and we operate through the standard __setitem__ method of the block collection class in order to properly manage the upper/lower case and parent-child housekeeping.

<Normalise DDLm save frame names>= (<-U)
def ddlm_normalise(self):
    for key,value in self.items():
       if value.has_key("_definition.id"):
           real_name = value["_definition.id"]
           if real_name.lower() != key.lower():
              self.rename(key,real_name)
    

A dataname can appear in a file under a different name if it has been aliased. We create an alias table to speed up lookup. The table is indexed by true name, with a list of alternatives.

<Create alias table>= (<-U)
def create_alias_table(self):
    """Populate an alias table that we can look up when searching for a dataname"""
    all_aliases = [a for a in self.keys() if self[a].has_key('_alias.definition_id')]
    self.alias_table = dict([[a,self[a]['_alias.definition_id']] for a in all_aliases])
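
A sketch of inspecting the resulting table (assuming dic is an initialised DDLm CifDic):

    for true_name, aliases in dic.alias_table.items():
        print '%s is also known as %s' % (true_name, `aliases`)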

DDLm internally refers to data items by the category.object notation, with the twist that child categories of loops can have their objects appear in the parent category. So this table prepares a complete list of (cat,obj):dataname correspondences, as the implementation of parent-child requires looking up a table each time searching for children.

The base_table creation line at the beginning will include the datablock within which all the save frames appear; there may be a better alternative, e.g. removing access to this block altogether.

The recursive expand_base_table function returns a list of (object_id, definition_id) pairs covering a category and its nested children, which is used to fill in the (cat,obj) lookup table. We must catch any keys and exclude them from this process, as they are allowed to have the same object_id as their parent key in the enclosing datablock and would overwrite the entry for the parent key if left in. We also note that the example dictionary allows these types of name collisions if an item is intended to be identical (e.g. _atom_site_aniso.type_symbol and _atom_site.type_symbol), so we create a short list of possible alternative names for each (cat,obj) pair.

The create_cat_key_table function stores information about which keys index child categories. This way applications can search for any loops containing these keys and expand packets for dREL accordingly.

<Create category/object table>= (<-U)
def create_cat_obj_table(self):
    """Populate a table indexed by (cat,obj) and returning the correct dataname"""
    base_table = dict([((self[a].get('_name.category_id','').lower(),self[a].get('_name.object_id','').lower()),[self[a].get('_definition.id','')]) \
                       for a in self.keys() if self[a].get('_definition.scope','Item')=='Item'])
    self.loopable_cats = [a.lower() for a in self.keys() if self[a].get('_definition.class','')=='Loop']
    loopers = [self.get_immediate_children(a) for a in self.loopable_cats]
    loop_children = [[b[0] for b in a if b[0].lower() in self.loopable_cats ] for a in loopers]
    expand_list = dict([(a,b) for a,b in zip(self.loopable_cats,loop_children) if len(b)>0])
    print "Expansion list:" + `expand_list`
    extra_table = {}   #for debugging we keep it separate from base_table until the end
    def expand_base_table(parent_cat,child_cats):
        extra_names = []
        # first deal with all the child categories
        for child_cat in child_cats:
          nn = []
          if expand_list.has_key(child_cat):  # a nested category: grab its names
            nn = expand_base_table(child_cat,expand_list[child_cat])
            # store child names
            extra_names += nn
          # add all child names to the table
          child_names = [(self[n]['_name.object_id'].lower(),self[n]['_definition.id']) for n in self.names_in_cat(child_cat) if self[n].get('_definition.scope','Item')=='Item' and \
                             self[n].get('_type.purpose','') != 'Key']
          child_names += extra_names
          extra_table.update(dict([((parent_cat,obj),[name]) for obj,name in child_names if (parent_cat,obj) not in extra_table]))
        # and the repeated ones get appended instead
        repeats = [(obj,name) for obj,name in child_names if (parent_cat,obj) in extra_table]
        for obj,name in repeats:
            extra_table[(parent_cat,obj)] += [name]
        # and finally, add our own names to the return list
        child_names += [(self[n]['_name.object_id'].lower(),self[n]['_definition.id']) for n in self.names_in_cat(parent_cat) if self[n].get('_definition.scope','Item')=='Item' and \
                            self[n].get('_type.purpose','')!='Key']
        return child_names
    [expand_base_table(parent,child) for parent,child in expand_list.items()]
    print 'Expansion cat/obj values: ' + `extra_table`
    # append repeated ones
    non_repeats = dict([a for a in extra_table.items() if a[0] not in base_table])
    repeats = [a for a in extra_table.keys() if a in base_table]
    base_table.update(non_repeats)
    for k in repeats:
        base_table[k] += extra_table[k]
    self.cat_obj_lookup_table = base_table
    self.loop_expand_list = expand_list
    
def create_cat_key_table(self):
    """Create a utility table with a list of keys applicable to each category"""
    self.cat_key_table = dict([(c,[self[c]["_category.key_id"]]) for c in self.loopable_cats])
    def collect_keys(parent_cat,child_cats):
            kk = []
            for child_cat in child_cats:
                if self.loop_expand_list.has_key(child_cat):
                    kk += collect_keys(child_cat,self.loop_expand_list[child_cat])
                # add these keys to our list
                kk += [self[child_cat]['_category.key_id']]
            self.cat_key_table[parent_cat] = self.cat_key_table[parent_cat] + kk
            return kk
    for k,v in self.loop_expand_list.items():
        collect_keys(k,v)
    print 'Keys for categories' + `self.cat_key_table`

DDLm introduces validity information in the enclosing datablock. It is a loop of (scope, attribute) values, where the scope is one of dictionary (everywhere), category (the whole category) or item (just the single definition). Validity can be '+' (mandatory), '.' (encouraged) or '!' (not allowed). It only appears in the DDLm attributes dictionary, so this information is blank unless we are dealing with that dictionary.
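
Schematically, the loop this routine consumes looks like the following (values hypothetical; the attributes string is whitespace-separated marker/name pairs, as assumed by the parsing code below):

    loop_
        _dictionary_valid.scope
        _dictionary_valid.attributes
          dictionary   '+ _dictionary.title'
          item         '! _enumeration.default'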

<Parse DDLm validity information>= (<-U)
def ddlm_parse_valid(self):
    if not self.dic_as_cif[self.master_key].has_key("_dictionary_valid.scope"):
        return
    for scope_pack in self.dic_as_cif[self.master_key].GetLoop("_dictionary_valid.scope"):
        scope = getattr(scope_pack,"_dictionary_valid.scope")
        valid_info = getattr(scope_pack,"_dictionary_valid.attributes")
        valid_info = valid_info.split()
        for i in range(0,len(valid_info),2): 
            if valid_info[i]=="+":
               self.scopes_mandatory[scope.lower()].append(valid_info[i+1].lower())
            elif valid_info[i]=="!":
               self.scopes_naughty[scope.lower()].append(valid_info[i+1].lower())

These methods were added when developing interactive editing tools, which allow shifting categories around.
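
A sketch of an editing session using the methods defined below (all names hypothetical):

    dic = CifDic("my_dictionary.dic")
    cat = dic.add_category("measurement")         # child of the 'Head' category
    dic.add_definition("_measurement.temperature", cat,
                       def_text="Temperature during measurement")
    dic.change_category_name(cat, "experiment")   # renames children as well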

<Category manipulation methods>= (<-U)
<Changing and updating categories>
<Getting category information>

Changing a category name involves changing the _name.category_id in all children as well as the category definition itself and datablock names, then updating our internal structures.

<Changing and updating categories>= (<-U)
def change_category_name(self,oldname,newname):
    """Change the category name from [[oldname]] to [[newname]]"""
    self.unlock()
    if not self.has_key(oldname):
        raise KeyError,'Cannot rename non-existent category %s to %s' % (oldname,newname)
    if self.has_key(newname):
        raise KeyError,'Cannot rename %s to %s as %s already exists' % (oldname,newname,newname)
    self.rename(oldname,newname)   #NB no name integrity checks
    self[newname]['_name.object_id']=newname
    self[newname]['_definition.id']=newname
    child_defs = [a[0] for a in self.get_immediate_children(newname)]
    for child_def in child_defs:
        self[child_def]['_name.category_id'] = newname
        if self[child_def].get('_definition.scope','Item')=='Item':
            newid = self.create_catobj_name(newname,self[child_def]['_name.object_id'])
            self[child_def]['_definition.id']=newid
            self.rename(child_def,newid[1:])  #no underscore at the beginning
    # update categories
    print `self.cat_map.values()`
    oldid = [a[0] for a in self.cat_map.items() if a[1].lower()==oldname.lower()]
    if len(oldid)!=1:
        raise CifError, 'Unable to find old category name in category map: %s not in %s' % (oldname.lower(),`self.cat_map.items()`)
    del self.cat_map[oldid[0]]
    self.cat_map[newname.lower()] = newname
    self.lock()

def create_catobj_name(self,cat,obj):
    """Combine category and object in approved fashion to create id"""
    return ('_'+cat+'.'+obj)

def change_category(self,itemname,catname):
    """Move itemname into catname"""
    defid = self[itemname]
    if defid['_name.category_id'].lower()==catname.lower():
        print 'Already in category, no change'
        return
    if catname not in self:    #don't have it
        print 'No such category %s' % catname
        return
    self.unlock()
    objid = defid['_name.object_id']
    defid['_name.category_id'] = catname
    newid = itemname # stays the same for categories
    if defid.get('_definition.scope','Item') == 'Item':
        newid = self.create_catobj_name(catname,objid)
        defid['_definition.id']= newid
        self.rename(itemname,newid)
    self.set_parent(catname,newid)  
    self.lock()

def change_name(self,one_def,newobj):
    """Change the object_id of one_def to newobj"""
    newid = self.create_catobj_name(self[one_def]['_name.category_id'],newobj)
    self.unlock()
    self.rename(one_def,newid)
    self[newid]['_definition.id']=newid
    self[newid]['_name.object_id']=newobj
    self.lock()
    return newid
   
def add_category(self,catname,catparent=None):
    """Add a new category to the dictionary with name [[catname]].
       If [[catparent]] is None, the category will be a child of
       the topmost 'Head' category or else the top data block."""
    if catname in self:
        raise CifError, 'Attempt to add existing category %s' % catname
    self.unlock()
    if catparent is None:
        root_cat = [a for a in self.keys() if self[a].get('_definition.class',None)=='Head']
        if len(root_cat)>0:
            root_cat = root_cat[0]
        else:
            root_cat = self.get_roots()[0]
    else:
        root_cat = catparent
    realname = self.NewBlock(catname,parent=root_cat)
    self[realname]['_name.object_id'] = realname
    self[realname]['_name.category_id'] = self[root_cat]['_name.object_id']
    self[realname]['_definition.id'] = realname
    self[realname]['_definition.scope'] = 'Category'
    self[realname]['_definition.class'] = 'Loop'
    self[realname]['_description.text'] = 'No definition provided'
    self.lock()
    self.cat_map[realname]=realname
    return realname

def add_definition(self,itemname,catparent,def_text='PLEASE DEFINE ME'):
    """Add itemname to category [[catparent]]. If itemname contains periods,
    all text before the final period is ignored."""
    self.unlock()
    if '.' in itemname:
        objname = itemname.split('.')[-1]
    else:
        objname = itemname
    objname = objname.strip('_')
    if catparent not in self or self[catparent]['_definition.scope']!='Category':
        raise CifError, 'No category %s in dictionary' % catparent
    fullname = '_'+catparent.lower()+'.'+objname
    print 'New name: %s' % fullname
    realname = self.NewBlock(fullname, fix=False, parent=catparent)
    self[realname]['_definition.id']=fullname
    self[realname]['_name.object_id']=objname
    self[realname]['_name.category_id']=catparent
    self[realname]['_definition.class']='Datum'
    self[realname]['_description.text']=def_text
    return realname
    
def remove_definition(self,defname):
    """Remove a definition from the dictionary. If a category, we have to
    remove the links in cat_map"""
    if defname not in self:
        return
    if self[defname].get('_definition.scope')=='Category':
        child_cats = [a['_name.category_id'] for a in self.get_children(defname) if a.get('_definition.scope')=='Category']
        for cc in child_cats: del self.cat_map[cc]
    del self[defname]

The DDLm architecture identifies a data definition by a (category, object) pair, which corresponds to a unique textual dataname appearing in the data file. Because of category joins when nested categories are looped, a single dataname may be referred to by several different category identifiers. The get_name_by_cat_obj routine will search all loop categories within the given category's hierarchy until it finds the appropriate one.

If give_default is True, the default construction '_catid.objid' is returned if nothing is found in the dictionary. This should only be used during testing, as the lack of a corresponding definition in the dictionary means that it is unlikely that anything sensible will result.
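
A sketch of the lookup (assuming dic is an initialised DDLm CifDic; names hypothetical):

    print dic.get_name_by_cat_obj("atom_site", "label")
    # -> '_atom_site.label', if the dictionary defines it
    print dic.get_name_by_cat_obj("no_such_cat", "thing", give_default=True)
    # -> '_no_such_cat.thing' (constructed default; testing only)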

<Getting category information>= (<-U)
def get_cat_obj(self,name):
    """Return (cat,obj) tuple. [[name]] must contain only a single period"""
    cat,obj = name.split('.')
    return (cat.strip('_'),obj)
    
def get_name_by_cat_obj(self,category,object,give_default=False):
    """Return the dataname corresponding to the given category and object"""
    if category[0] == '_':    #accidentally left in
       true_cat = category[1:].lower()
    else:
       true_cat = category.lower()
    try:
        return self.cat_obj_lookup_table[(true_cat,object.lower())][0]
    except KeyError:
        if give_default:
           return '_'+true_cat+'.'+object
    raise KeyError, 'No such category,object in the dictionary: %s %s' % (true_cat,object)

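For illustration, a hedged usage sketch (the dictionary file and datanames are hypothetical, and we assume cd is an initialised CifDic):

cd = CifDic('cif_core_ddlm.dic')
print cd.get_cat_obj('_cell.volume')            # -> ('cell', 'volume')
print cd.get_name_by_cat_obj('cell','volume')   # -> '_cell.volume'
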
Dictionaries have the category-wide information in the category definition area. We create a map from data block / save frame header to category id. For DDL1 the block name has square brackets appended. For DDL2 the category name is kept in the _category.id attribute. For DDLm, we need to find all entries with a _definition.scope of "Category".

<Add category information>= (<-U)
def add_category_info(self):
    if self.diclang == "DDLm":
        <Get list of DDLm categories>
    else:
        <Get list of DDL1/2 categories>
    # match ids and entries in the dictionary
    catpairs = map(None,category_ids,categories)
    self.cat_map = {}
    for catid,cat in catpairs:self.cat_map[catid] = cat

<Get list of DDL1/2 categories>= (<-U)
categories = filter(lambda a:self[a].has_key("_category.id"),self.keys())
# get the category id
category_ids = map(lambda a:self[a]["_category.id"],categories)

Note that as of Oct 2007 we don't know if _definition.scope is mandatory, so we use a get instead of a straight square bracket.

<Get list of DDLm categories>= (<-U)
categories = [a for a in self.keys() if self[a].get("_definition.scope","Item")=="Category"]
category_ids = [self[a]["_definition.id"] for a in categories]


This method was added to facilitate running dREL scripts, which treat certain variables as having attributes which all belong to a single category. If names_only is True, we return only the object part of each dataname, in keeping with dREL syntax; otherwise the full '_category.object' datanames are returned.

<List all items in a category>= (<-U)
def names_in_cat(self,cat,names_only=False):
    nameblocks = filter(lambda a:self[a].get("_name.category_id","").lower()
                         ==cat.lower(),self.keys())
    if not names_only:
        return ["_" + self[a]["_name.category_id"]+"." + self[a]["_name.object_id"] for a in nameblocks if self[a].get('_definition.scope','Item')=='Item']
    else:
        return map(lambda a:self[a]["_name.object_id"],nameblocks)
    
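A hedged sketch (hypothetical category and datanames):

print cd.names_in_cat('cell')
# -> ['_cell.volume', '_cell.angle_alpha', ...]
print cd.names_in_cat('cell',names_only=True)
# -> ['volume', 'angle_alpha', ...]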

This method was added for DDLm support. We are passed a category and a value, and must find a packet which has a matching key. We use the keyname as a way of finding the loop.

<Return a single packet by key>= (<-U)
def get_key_pack(self,category,value,data):
    keyname = self[category][self.unique_spec]
    onepack = data.GetPackKey(keyname,value)
    return onepack
 

DDLm functionality

DDLm is a far more complex dictionary standard than DDL2. We are able to import definitions, for example. The top-level dictionary notionally contains a very minimal set of save frame definitions. We start at this top level and work down, resolving import references recursively. This will satisfy the requirement that conflicts are resolved from the most nested level upwards.

We use the built-in PyCIFRW merging methods.

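For orientation, a single import reference as interrogated below behaves like the following Python table (a hedged sketch; the file and frame names are hypothetical):

import_ref = {'file':'templ_enum.cif',  # location, resolved against the dictionary URI
              'save':'units_code',      # save frame to import
              'mode':'Contents',        # or 'Full' to bring in child frames too
              'dupl':'Exit',            # on duplicate frame: 'Exit' or 'Ignore'
              'miss':'Exit'}            # on missing frame: 'Exit', or anything else to skip
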
<Perform DDLm imports>= (<-U)
<Master import routine>

<Master import routine>= (<-U)
def ddlm_import(self):
    import urllib
    import_frames = [(a,self[a]['_import.get']) for a in self.keys() if self[a].has_key('_import.get')]
    #resolve all references
    for parent_block,import_list in import_frames:
      for import_ref in import_list:
        file_loc = import_ref["file"]
        full_uri = self.resolve_path(file_loc)
        if full_uri not in self.template_cache:
            dic_as_cif = CifFile(urllib.urlopen(full_uri),grammar=self.grammar)
            self.template_cache[full_uri] = CifDic(dic_as_cif,do_minimum=True)  #this will recurse internal imports
            print 'Added %s to cached dictionaries' % full_uri
        import_from = self.template_cache[full_uri]
        dupl = import_ref.get('dupl','Exit') 
        miss = import_ref.get('miss','Exit')
        target_key = import_ref["save"]
        try:
            import_target = import_from[target_key]
        except KeyError:
            if miss == 'Exit':
               raise CifError,'Import frame %s not found in %s' % (target_key,full_uri)
            else: continue
        # now import appropriately
        mode = import_ref.get("mode",'Contents').lower()
        if self.has_key(target_key) and mode=='full':  #so blockname will be duplicated
            if dupl == 'Exit':
                raise CifError, 'Import frame %s already in dictionary' % target_key
            elif dupl == 'Ignore':
                continue
        child_frames = import_from.get_children(target_key,include_parent=(mode=='full'))
        if mode == 'contents':   #merge self and children only
            # merge block contents
            self[parent_block].merge(import_target)
            self.merge_fast(child_frames,parent=parent_block)
        elif mode =="full":
            self.merge_fast(child_frames,parent=parent_block)      #
            print 'Merged %s (%d defs) in %s mode, now have %d defs' % (target_key,len(child_frames),
               mode,len(self))
      # it will never happen again...
      del self[parent_block]["_import.get"]
                
def resolve_path(self,file_loc):
    import urlparse
    url_comps = urlparse.urlparse(file_loc)
    if url_comps[0]: return file_loc    #already full URI
    new_url = urlparse.urljoin(self.dic_as_cif.my_uri,file_loc)
    #print "Transformed %s to %s for import " % (file_loc,new_url)
    return new_url
    

Merging a whole dictionary. A dictionary is a collection of categories for the purposes of merging (later we may want to keep some audit information).

<Add another DDLM dictionary>=
def get_whole_dict(self,source_dict,on_dupl,on_miss):
    print "Cat_map: `%s`" % source_dict.cat_map.values()
    for source_cat in source_dict.cat_map.values():
        self.get_one_cat(source_dict,source_cat,on_dupl,on_miss)
   

Merging a single category. If this category does not exist, we simply add the category block and any members of the category. If it does exist, we use the 'on_dupl' flag to resolve our behaviour, either ignoring, replacing, or dying a horrible death.

If the specified block is missing in the external dictionary, we either skip it or die a horrible death.

<Add an external DDLM category>=
def get_one_cat(self,source_dict,source_cat,on_dupl,on_miss):
    ext_cat = source_dict.get(source_cat,"")
    this_cat = self.get(source_cat,"")
    print "Adding category %s" % source_cat
    if not ext_cat:
        if on_miss == "Ignore":
           pass
        else:
           raise CifError, "Missing category %s" % source_cat 
    else:
        all_ext_defns = source_dict.keys()
        cat_list = filter(lambda a:source_dict[a].get("_name.category_id","").lower()==source_cat.lower(),
                           all_ext_defns) 
        print "Items: %s" % `cat_list`
        if this_cat:     # The category block itself is duplicated
            if on_dupl=="Ignore":
                pass
            elif on_dupl == "Exit":
                raise CifError, "Duplicate category %s" % source_cat
            else: 
                self[source_cat] = ext_cat
        else:
            self[source_cat] = ext_cat
        # now do all member definitions
        for cat_defn in cat_list:
            self.add_one_defn(source_dict,cat_defn,on_dupl)

def add_one_defn(self,source_dict,cat_defn,on_dupl):
    if self.has_key(cat_defn):
       if on_dupl == "Ignore": pass
       elif on_dupl == "Exit": 
               raise CifError, "Duplicate definition %s" % cat_defn
       else: self[cat_defn] = source_dict[cat_defn]
    else: self[cat_defn] = source_dict[cat_defn]
    print "    "+cat_defn
    

This actually follows the children of the category down. We get a list of child categories and add them one by one recursively.

<Add an external DDLM category with children>=
def get_one_cat_with_children(self,source_dict,source_cat,on_dupl,on_miss):
    self.get_one_cat(source_dict,source_cat,on_dupl,on_miss)
    child_cats = filter(lambda a:source_dict[a]["_category.parent_id"]==source_dict[source_cat]["_definition.id"],source_dict.cat_map.values())
    for child_cat in child_cats: self.get_one_cat(source_dict,child_cat,on_dupl,on_miss) 

Importing into definitions. We are adjusting only the attributes of a single definition.

<Add attributes to definitions>=
def import_attributes(self,mykey,source_dict,source_def,on_dupl,on_miss):
    # process missing 
    if not source_dict.has_key(source_def): 
        if on_miss == 'Exit':
            raise CifError, 'Missing definition for import %s' % source_def
        else: return          #nothing else to do
    # now do the import
    print 'Adding attributes from %s to %s' % (source_def,mykey)
    self[mykey].merge(source_dict[source_def],mode='replace',match_att= \
          ['_definition.id','_name.category_id','_name.object_id'])

def import_loop(self,mykey,source_dict,source_def,loop_name,on_miss):
    # process missing
    if not source_dict.has_key(source_def): 
        if on_miss == 'Exit':
            raise CifError, 'Missing definition for import %s' % source_def
        else: return          #nothing else to do
    print 'Adding %s attributes from %s to %s' % (loop_name,source_def,mykey)
    state_loop = source_dict[source_def].GetLoop(loop_name)
    self[mykey].insert_loop(state_loop) 
   

Validation

These validation checks are intended to be called externally. They return a dictionary keyed by item name with value being a list of the results of the check functions. The individual functions return a dictionary which contains at least the key "result", and in case of error relevant keys relating to the error.

<Run validation tests>= (<-U)
def run_item_validation(self,item_name,item_value):
    return {item_name:map(lambda f:(f.__name__,f(item_name,item_value)),self.item_validation_funs)}

def run_loop_validation(self,loop_names):
    return {loop_names[0]:map(lambda f:(f.__name__,f(loop_names)),self.loop_validation_funs)}

def run_global_validation(self,item_name,item_value,data_block,provisional_items={},globals={}):
    results = map(lambda f:(f.__name__,f(item_name,item_value,data_block,provisional_items,globals)),self.global_validation_funs)
    return {item_name:results}

def run_block_validation(self,whole_block,globals={},fake_mand=False):
    results = map(lambda f:(f.__name__,f(whole_block,globals,fake_mand)),self.block_validation_funs)
    # fix up the return values
    return {"whole_block":results}

Optimization: the dictionary validation routines normally retain no history of what has been checked, as they are executed on a per-item basis. This leads to duplication of the uniqueness check, when there is more than one key, and duplication of the parent-child check, once for the parent and once for the child. By switching on optimisation, a record is kept and these checks will not be repeated. This is safe only if none of the relevant items is altered while optimisation is on, and optimisation should be switched off as soon as all the checks are finished.

<Optimisation on/off>= (<-U)
def optimize_on(self):
    self.optimize = True
    self.done_keys = []
    self.done_children = []
    self.done_parents = []

def optimize_off(self):
    self.optimize = False
    self.done_keys = []
    self.done_children = []
    self.done_parents = []

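The intended calling pattern is sketched below (item names and values are hypothetical):

cd.optimize_on()
for name,value in items_to_check:    # e.g. all items of a single loop
    results = cd.run_item_validation(name,value)
cd.optimize_off()                    # always switch off once checking is complete
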
Some things are independent of where an item occurs in the file; we check those things here. All functions are expected to return a dictionary with at least one key: "result", as well as optional keys depending on the type of error.

<Item-level validation>= (<-U)
<Validate the type of an item (DDL1/2)>
<Validate the type of an item (DDLm)>
<Validate esd presence>
<Validate enumeration range>
<Validate an enumeration>
<Validate looping properties>

Validate the type of an item

We use the expressions for type that we have available to check that the type of the item passed to us matches up. We may have a list of items, so be aware of that. We define a tiny matching function so that we don't have to do a double match to catch the non-matching case, which returns None and thus an attribute error if we immediately try to get a group.

Note also that none of the extant dictionaries use the 'none' or 'seq' values for type. The seq value in particular would complicate matters.

<Validate the type of an item (DDL1/2)>= (<-U)
def validate_item_type(self,item_name,item_value):
    def mymatch(m,a):  
        res = m.match(a)
        if res != None: return res.group() 
        else: return ""
    target_type = self[item_name].get(self.type_spec) 
    if target_type == None:          # e.g. a category definition
        return {"result":True}                  # not restricted in any way
    matchexpr = self.typedic[target_type]
    item_values = listify(item_value)
    #for item in item_values:
        #print "Type match " + item_name + " " + item + ":",
    #skip dots and question marks
    check_all = filter(lambda a: a !="." and a != "?",item_values)
    check_all = filter(lambda a: mymatch(matchexpr,a) != a, check_all)
    if len(check_all)>0: return {"result":False,"bad_values":check_all}
    else: return {"result":True}

DDLm types are far more nuanced, and we are not provided with prepacked regular expressions with which to check them. We have identified the following checks: that the value is in the correct container; that the contents are as described in _type.contents; that 'State' purpose datanames have a list of enumerated states; that 'Link' purpose datanames have '_name.linked_item_id' in the same definition; and that 'SU' purpose datanames likewise have '_name.linked_item_id' in the same definition.

<Validate the type of an item (DDLm)>= (<-U)
def decide(self,result_list):
    """Construct the return list"""
    if len(result_list)==0:
           return {"result":True}
    else:
           return {"result":False,"bad_values":result_list}

def validate_item_container(self, item_name,item_value):
    container_type = self[item_name]['_type.container']
    item_values = listify(item_value)
    if container_type == 'Single':
       okcheck = [a for a in item_values if not isinstance(a,(int,float,long,basestring))]
       return self.decide(okcheck)
    if container_type in ('Multiple','List'):
       okcheck = [a for a in item_values if not isinstance(a,StarList)]
       return self.decide(okcheck)
    if container_type == 'Array':    #A list with numerical values
       okcheck = [a for a in item_values if not isinstance(a,StarList)]
       first_check = self.decide(okcheck)
       if not first_check['result']: return first_check
       #TODO: additionally check that every element of each StarList is numerical

Esds. Numbers are sometimes not allowed to have esds appended. The default is that esds are not OK, and we should also skip anything that has character type, as that is automatically not a candidate for esds.

Note that we make use of the primitive type here; there are some cases where a string type looks like an esd, so unless we know we have a number we ignore these cases.

<Validate esd presence>= (<-U)
def validate_item_esd(self,item_name,item_value):
    if self[item_name].get(self.primitive_type) != 'numb':
        return {"result":None}
    can_esd = self[item_name].get(self.esd_spec,"none") == "esd" 
    if can_esd: return {"result":True}         #must be OK!
    item_values = listify(item_value)
    check_all = filter(lambda a: get_number_with_esd(a)[1] != None, item_values)
    if len(check_all)>0: return {"result":False,"bad_values":check_all}
    return {"result":True}

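For example (a hedged sketch with a hypothetical DDL1 dataname of primitive type 'numb' for which esds are not allowed):

print cd.validate_item_esd('_cell_volume',['1201.5','1199.8(4)'])
# -> {'result': False, 'bad_values': ['1199.8(4)']}
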
Enumeration ranges. Our dictionary has been prepared as for a DDL2 dictionary, where loops are used to specify closed or open ranges: if an entry exists where maximum and minimum values are equal, this means that this value is included in the range; otherwise, ranges are open. Our value is already numerical.

<Validate enumeration range>= (<-U)
def validate_enum_range(self,item_name,item_value):
    if not self[item_name].has_key("_item_range.minimum") and \
       not self[item_name].has_key("_item_range.maximum"):
        return {"result":None}
    minvals = self[item_name].get("_item_range.minimum",default = ["."])
    maxvals = self[item_name].get("_item_range.maximum",default = ["."])
    def makefloat(a):
        if a == ".": return a
        else: return float(a)
    maxvals = map(makefloat, maxvals)
    minvals = map(makefloat, minvals)
    rangelist = map(None,minvals,maxvals)
    item_values = listify(item_value)
    def map_check(rangelist,item_value):
        if item_value == "?" or item_value == ".": return True
        iv,esd = get_number_with_esd(item_value)
        if iv==None: return None  #shouldn't happen as is numb type
        for lower,upper in rangelist:
            #check the minima
            if lower == ".": lower = iv - 1
            if upper == ".": upper = iv + 1
            if iv > lower and iv < upper: return True
            if upper == lower and iv == upper: return True
        # debug
        # print "Value %s fails range check %d < x < %d" % (item_value,lower,upper)
        return False
    check_all = filter(lambda a,b=rangelist: map_check(b,a) != True, item_values)
    if len(check_all)>0: return {"result":False,"bad_values":check_all}
    else: return {"result":True}
            
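As an illustration (hypothetical dataname), minima of ['0','0'] and maxima of ['1','0'] produce rangelist [(0.0,1.0),(0.0,0.0)], i.e. 0 <= x < 1: the (0.0,0.0) row closes the lower end, while 1.0 itself stays excluded.

print cd.validate_enum_range('_atom_site_occupancy',['0.0','0.5','1.0'])
# -> {'result': False, 'bad_values': ['1.0']}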

Note that we must make a copy of the enum list, otherwise when we add in our ? and . they will modify the Cif in place, very sneakily, and next time we have a loop length check, e.g. in writing out, we will probably have a mismatch.

<Validate an enumeration>= (<-U)
def validate_item_enum(self,item_name,item_value):
    try: 
        enum_list = self[item_name][self.enum_spec][:]
    except KeyError:
        return {"result":None}
    enum_list.append(".")   #default value
    enum_list.append("?")   #unknown
    item_values = listify(item_value)
    #print "Enum check: %s in %s" % (`item_values`,`enum_list`)
    check_all = filter(lambda a: a not in enum_list,item_values)
    if len(check_all)>0: return {"result":False,"bad_values":check_all}
    else: return {"result":True}

Check that something can be looped. For DDL1 we have yes, no and both. For DDL2 there is no explicit restriction on looping beyond membership in a category. Note that the DDL1 language specifies a default value of 'no' for this item, so when not explicitly allowed by the dictionary, listing is prohibited.

<Validate looping properties>= (<-U)
def validate_looping(self,item_name,item_value):
    try:
        must_loop = self[item_name][self.must_loop_spec]
    except KeyError:
        return {"result":None}
    if must_loop == 'yes' and isinstance(item_value,basestring): # not looped
        return {"result":False}      #this could be triggered
    if must_loop == 'no' and not isinstance(item_value,basestring): 
        return {"result":False}
    return {"result":True}

And some things are related to the group structure. Note that these functions do not require knowledge of the item values.

<Loop-level validation>= (<-U)
<Validate loop membership>
<Validate loop key>
<Validate loop mandatory items>
<Get alternative item names>

Loop membership. The most common constraints on a loop are that all items are from the same category, and that loops of a certain category must contain a certain key to be valid. The latter test should be performed after the former test.

<Validate loop membership>= (<-U)
def validate_loop_membership(self,loop_names):
    try:
        categories = map(lambda a:self[a][self.cat_spec],loop_names)
    except KeyError:       #category is mandatory
        raise ValidCifError( "%s missing from dictionary %s for item in loop containing %s" % (self.cat_spec,self.dicname,loop_names[0]))
    bad_items =  filter(lambda a:a != categories[0],categories)
    if len(bad_items)>0:
        return {"result":False,"bad_items":bad_items}
    else: return {"result":True}

The items specified by _list_mandatory (DDL1) must be present in a loop containing items of a given category (and it follows that only one loop in a given data block is available for any category containing such an item).

<Validate loop key>= (<-U)
def validate_loop_key(self,loop_names):
    category = self[loop_names[0]][self.cat_spec]
    # find any unique values which must be present 
    entry_name = self.cat_map[category]
    key_spec = self[entry_name].get("_category_mandatory.name",[])
    for names_to_check in key_spec:
        if isinstance(names_to_check,basestring):   #only one
            names_to_check = [names_to_check]
        for loop_key in names_to_check:
            if loop_key not in loop_names: 
                #is this one of those dang implicit items?
                if self[loop_key].get(self.must_exist_spec,None) == "implicit":
                    continue          #it is virtually there...
                alternates = self.get_alternates(loop_key)
                if alternates == []: 
                    return {"result":False,"bad_items":loop_key}
                for alt_names in alternates:
                    alt = filter(lambda a:a in loop_names,alt_names)
                    if len(alt) == 0: 
                        return {"result":False,"bad_items":loop_key}  # no alternates   
    return {"result":True}
    

The _list_reference value specifies data names which must co-occur with the defined data name. We check that this is indeed the case for all items in the loop. We trace through alternate values as well. In DDL1 dictionaries, a name terminating with an underscore indicates that any(?) corresponding name is suitable.

<Validate loop mandatory items>= (<-U)
def validate_loop_references(self,loop_names):
    must_haves = map(lambda a:self[a].get(self.list_ref_spec,None),loop_names)
    must_haves = filter(lambda a:a != None,must_haves)
    # build a flat list.  For efficiency we don't remove duplicates,as
    # we expect no more than the order of 10 or 20 looped names.
    def flat_func(a,b): 
        if isinstance(b,basestring): 
           a.append(b)       #single name
        else:
           a.extend(b)       #list of names
        return a
    flat_mh = reduce(flat_func,must_haves,[])
    group_mh = filter(lambda a:a[-1]=="_",flat_mh)
    single_mh = filter(lambda a:a[-1]!="_",flat_mh)
    res = filter(lambda a: a not in loop_names,single_mh)
    def check_gr(s_item, name_list):
        nl = map(lambda a:a[:len(s_item)],name_list)
        if s_item in nl: return True
        return False
    res_g = filter(lambda a:check_gr(a,loop_names),group_mh)
    if len(res) == 0 and len(res_g) == 0: return {"result":True}
    # construct alternate list
    alternates = map(lambda a: (a,self.get_alternates(a)),res)
    alternates = filter(lambda a:a[1] != [], alternates)
    # next two lines purely for error reporting
    missing_alts = filter(lambda a: a[1] == [], alternates)
    missing_alts = map(lambda a:a[0],missing_alts)
    if len(alternates) != len(res): 
       return {"result":False,"bad_items":missing_alts}   #short cut; at least one
                                                   #doesn't have an altern
    #loop over alternates
    for orig_name,alt_names in alternates:
         alt = filter(lambda a:a in loop_names,alt_names)
         if len(alt) == 0: return {"result":False,"bad_items":orig_name}# no alternates   
    return {"result":True}        #found alternates
         

A utility function to return a list of alternate names given a main name. In DDL2 we have to deal with aliases. Each aliased item appears in our normalised dictionary independently, so there is no need to resolve aliases when looking up a data name. However, the original definition using DDL2-type names is simply copied to this aliased name during normalisation, so all references to other item names (e.g. _item_dependent) have to be resolved using the present function.

These aliases are returned in any case, so if we had a data file which mixed DDL1 and DDL2 style names, it may turn out to be valid, and what's more, we wouldn't necessarily detect an error if a data name and its alias were present - need to ponder this.

The exclusive_only option will only return items which must not co-exist with the item name in the same datablock. This includes aliases, and allows us to do a check that items and their aliases are not present at the same time in a data file.

<Get alternative item names>= (<-U)
def get_alternates(self,main_name,exclusive_only=False):
    alternates = self[main_name].get(self.related_func,None)
    alt_names = []
    if alternates != None: 
        alt_names =  self[main_name].get(self.related_item,None)
        if isinstance(alt_names,basestring): 
            alt_names = [alt_names]
            alternates = [alternates]
        together = map(None,alt_names,alternates)
        if exclusive_only:
            alt_names = filter(lambda a:a[1]=="alternate_exclusive" \
                                         or a[1]=="replace", together)
        else:
            alt_names = filter(lambda a:a[1]=="alternate" or a[1]=="replace",together)
        alt_names = map(lambda a:a[0],alt_names)
    # now do the alias thing
    alias_names = listify(self[main_name].get("_item_aliases.alias_name",[]))
    alt_names.extend(alias_names)
    # print "Alternates for %s: %s" % (main_name,`alt_names`)
    return alt_names
    
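A hedged sketch: if the definition of a hypothetical '_atom_site_label' listed '_atom_site.label' under _item_aliases.alias_name, then:

print cd.get_alternates('_atom_site_label')
# -> ['_atom_site.label'] plus any 'alternate' or 'replace' related names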

Some checks require access to the entire data block. These functions take both a provisional dictionary and a global dictionary; the provisional dictionary includes items which will go into the dictionary together with the current item, and the global dictionary includes items which apply to all data blocks (this is for validation of DDL1/2 dictionaries).

<Cross-item validation>= (<-U)
<Validate exclusion rules>
<Validate parent child relations>
<Validate presence of dependents>
<Validate list uniqueness>

DDL2 dictionaries introduce the "alternate exclusive" category for related items. We also unilaterally include items listed in aliases as acting in this way.

<Validate exclusion rules>= (<-U)
def validate_exclusion(self,item_name,item_value,whole_block,provisional_items={},globals={}):
   alternates = map(lambda a:a.lower(),self.get_alternates(item_name,exclusive_only=True))
   item_name_list = map(lambda a:a.lower(),whole_block.keys())
   item_name_list.extend(map(lambda a:a.lower(),provisional_items.keys()))
   item_name_list.extend(map(lambda a:a.lower(),globals.keys()))
   bad = filter(lambda a:a in item_name_list,alternates)
   if len(bad)>0:
       print "Bad: %s, alternates %s" % (`bad`,`alternates`)
       return {"result":False,"bad_items":bad}
   else: return {"result":True}

When validating parent/child relations, we check the parent link to the children, and separately check that parents exist for any children present. Switching on optimisation will remove the redundancy in this procedure, but only if no changes are made to the relevant data items between the two checks.

It appears that DDL2 dictionaries allow parents to be absent if children take only unspecified values (i.e. dot or question mark). We catch this case.

The provisional items dictionary includes items that are going to be included with the present item (in a single loop structure) so the philosophy of inclusion must be all or nothing.

When validating DDL2 dictionaries themselves, we are allowed access to other definition blocks in order to resolve parent-child pointers. We will be able to find these save frames inside the globals dictionary (they will in this case be collected inside a CifBlock object).

When removing, we look at the item to make sure that no child items require it to be present.

<Validate parent child relations>= (<-U)
# validate that parent exists and contains matching values
def validate_parent(self,item_name,item_value,whole_block,provisional_items={},globals={}):
    parent_item = self[item_name].get(self.parent_spec)
    if not parent_item: return {"result":None}   #no parent specified
    if isinstance(parent_item,list): 
        parent_item = parent_item[0]
    if self.optimize:
        if parent_item in self.done_parents:
            return {"result":None}
        else: 
            self.done_parents.append(parent_item)
            print "Done parents %s" % `self.done_parents`
    # initialise parent/child values
    if isinstance(item_value,basestring):
        child_values = [item_value]
    else: child_values = item_value[:]    #copy for safety
    # track down the parent
    # print "Looking for %s parent item %s in %s" % (item_name,parent_item,`whole_block`)
    # if globals contains the parent values, we are doing a DDL2 dictionary, and so 
    # we have collected all parent values into the global block - so no need to search
    # for them elsewhere. 
    # print "Looking for %s" % `parent_item`
    parent_values = globals.get(parent_item)
    if not parent_values:
        parent_values = provisional_items.get(parent_item,whole_block.get(parent_item))
    if not parent_values:  
        # go for alternates
        namespace = whole_block.keys()
        namespace.extend(provisional_items.keys())
        namespace.extend(globals.keys())
        alt_names = filter_present(self.get_alternates(parent_item),namespace)
        if len(alt_names) == 0:
            if len(filter(lambda a:a != "." and a != "?",child_values))>0:
                return {"result":False,"parent":parent_item}#no parent available -> error
            else:
                return {"result":None}       #maybe True is more appropriate??
        parent_item = alt_names[0]           #should never be more than one?? 
        parent_values = provisional_items.get(parent_item,whole_block.get(parent_item))
        if not parent_values:   # check global block
            parent_values = globals.get(parent_item)
    if isinstance(parent_values,basestring):
        parent_values = [parent_values]   
    #print "Checking parent %s against %s, values %s/%s" % (parent_item,
    #                                          item_name,`parent_values`,`child_values`)
    missing = self.check_parent_child(parent_values,child_values)
    if len(missing) > 0:
        return {"result":False,"bad_values":missing,"parent":parent_item}
    return {"result":True}

def validate_child(self,item_name,item_value,whole_block,provisional_items={},globals={}):
    try:
        child_items = self[item_name][self.child_spec][:]  #copy
    except KeyError:
        return {"result":None}    #not relevant
    # special case for dictionaries  -> we check parents of children only
    if globals.has_key(item_name):  #dictionary so skip
        return {"result":None}
    if isinstance(child_items,basestring): # only one child
        child_items = [child_items]
    if isinstance(item_value,basestring): # single value
        parent_values = [item_value]
    else: parent_values = item_value[:]
    # expand child list with list of alternates
    for child_item in child_items[:]:
        child_items.extend(self.get_alternates(child_item))
    # now loop over the children
    for child_item in child_items:
        if self.optimize:
            if child_item in self.done_children:
                return {"result":None}
            else: 
                self.done_children.append(child_item)
                print "Done children %s" % `self.done_children`
        if provisional_items.has_key(child_item):
            child_values = provisional_items[child_item][:]
        elif whole_block.has_key(child_item):
            child_values = whole_block[child_item][:]
        else:  continue 
        if isinstance(child_values,basestring):
            child_values = [child_values]
        #    print "Checking child %s against %s, values %s/%s" % (child_item,
        #                                          item_name,`child_values`,`parent_values`)
        missing = self.check_parent_child(parent_values,child_values)
        if len(missing)>0:
            return {"result":False,"bad_values":missing,"child":child_item}
    return {"result":True}       #could mean that no child items present
       
#a generic checker: all child vals should appear in parent_vals
def check_parent_child(self,parent_vals,child_vals):
    # shield ourselves from dots and question marks
    pv = parent_vals[:]
    pv.extend([".","?"])
    res =  filter(lambda a:a not in pv,child_vals)
    #print "Missing: %s" % res
    return res

def validate_remove_parent_child(self,item_name,whole_block):
    try:
        child_items = self[item_name][self.child_spec]
    except KeyError:
        return {"result":None}
    if isinstance(child_items,basestring): # only one child
        child_items = [child_items]
    for child_item in child_items:
        if whole_block.has_key(child_item): 
            return {"result":False,"child":child_item}
    return {"result":True}
     

The DDL2 _item_dependent attribute at first glance appears to be the same as _list_reference, however the dependent item does not have to appear in a loop at all, and neither does the other item name. Perhaps this behaviour was intended to be implied by having looped _names in DDL1 dictionaries, but we can't be sure and so don't implement this yet.

<Validate presence of dependents>= (<-U)
def validate_dependents(self,item_name,item_value,whole_block,prov={},globals={}):
    try:
        dep_items = self[item_name][self.dep_spec][:]
    except KeyError:
        return {"result":None}    #not relevant
    if isinstance(dep_items,basestring):
        dep_items = [dep_items]
    actual_names = whole_block.keys()
    actual_names.extend(prov.keys())
    actual_names.extend(globals.keys())
    missing = filter(lambda a:a not in actual_names,dep_items)
    if len(missing) > 0:
        alternates = map(lambda a:[self.get_alternates(a),a],missing)
        # compact way to get a list of alternative items which are 
        # present
        have_check = map(lambda b:[filter_present(b[0],actual_names),
                                   b[1]],alternates) 
        have_check = filter(lambda a:len(a[0])==0,have_check)
        if len(have_check) > 0:
            have_check = map(lambda a:a[1],have_check)
            return {"result":False,"bad_items":have_check}
    return {"result":True}
    

The _list_uniqueness attribute permits specification of a single or multiple items which must have a unique combined value. Currently it is only used in the powder dictionary to indicate that peaks must have a unique index, and in the core dictionary to indicate that a publication section name together with its label must be unique; however, it would appear to implicitly apply to any index-type value in any dictionary. This is used precisely once in the cif_core dictionary in a non-intuitive manner, but we code for this here. The value of the _list_uniqueness attribute can actually refer to another data name, which together with the defined name must be unique.

DDL2 dictionaries do away with separate _list_mandatory and _list_uniqueness attributes, instead using a _category_key. If multiple keys are specified, it appears that they must be unique in combination, judging from the way that _publ_body.label and _publ_body.element are supposed to copy the cif_core dictionary.

<Validate list uniqueness>= (<-U)
def validate_uniqueness(self,item_name,item_value,whole_block,provisional_items={},
                                                              globals={}):
    category = self[item_name].get(self.cat_spec)
    if category == None:
        print "No category found for %s" % item_name
        return {"result":None}
    # print "Category %s for item %s" % (`category`,item_name)
    catentry = self.cat_map[category]
    # we make a copy in the following as we will be removing stuff later!
    unique_i = self[catentry].get("_category_key.name",[])[:]
    if isinstance(unique_i,basestring):
        unique_i = [unique_i]
    if item_name not in unique_i:       #no need to verify
        return {"result":None}
    if isinstance(item_value,basestring):  #not looped
        return {"result":None}
    # print "Checking %s -> %s -> %s ->Unique: " % (item_name,category,catentry) + `unique_i`
    # check that we can't optimize by not doing this check
    if self.optimize:
        if unique_i in self.done_keys:
            return {"result":None}
        else:
            self.done_keys.append(unique_i)
    val_list = []
    # get the matching data from any other data items
    unique_i.remove(item_name)
    other_data = []
    if len(unique_i) > 0:            # i.e. do have others to think about
       for other_name in unique_i:
       # we look for the value first in the provisional dict, then the main block
       # the logic being that anything in the provisional dict overrides the
       # main block
           if provisional_items.has_key(other_name):
               other_data.append(provisional_items[other_name]) 
           elif whole_block.has_key(other_name):
               other_data.append(whole_block[other_name])
           elif self[other_name].get(self.must_exist_spec)=="implicit":
               other_data.append([item_name]*len(item_value))  #placeholder
           else:
               return {"result":False,"bad_items":other_name}#missing data name
    # ok, so we go through all of our values
    # this works by comparing lists of strings to one other, and
    # so could be fooled if you think that '1.' and '1' are 
    # identical
    for i in range(len(item_value)):
        #print "Value no. %d" % i ,
        this_entry = item_value[i]
        for j in range(len(other_data)):
            this_entry = " ".join([this_entry,other_data[j][i]]) 
        #print "Looking for %s in %s: " % (`this_entry`,`val_list`)
        if this_entry in val_list: 
            return {"result":False,"bad_values":this_entry}
        val_list.append(this_entry)
    return {"result":True}

<Block-level validation>= (<-U)
<Validate category presence>
<Fake mandatory category information>

DDL2 introduces a new idea, that of a mandatory category, items of which must be present. We check only this particular fact, and leave the checks for mandatory items within the category, keys etc. to the relevant routines. This would appear to be applicable to dictionaries only.

Also, although the natural meaning for a DDL2 dictionary would be that items from these categories must appear in every definition block, this is not what happens in practice, as category definitions do not have anything from the (mandatory) _item_description category. We therefore adopt the supremely useless meaning that mandatory categories in a dictionary context mean only that somewhere, maybe in only one save frame, an item from this category exists. This interpretation is forced by using the "fake_mand" argument, which then assumes that the alternative routine will be used to set the error information on a dictionary-wide basis.

Note that this routine is constructed such that not all mandatory categories are checked in the event of a single failure.

<Validate category presence>= (<-U)
def validate_mandatory_category(self,whole_block,globals={},fake_mand=False):
    if fake_mand:
        return {"result":True}
    mand_cats = filter(lambda a:self[a].get("_category.mandatory_code","no")=="yes",
                self.keys())
    # map to actual ids
    catlist = self.cat_map.items()
    # print "Mandatory categories - %s" % `mand_cats`
    all_keys = whole_block.keys() #non-save block keys
    if globals:         #
        all_keys.extend(globals.abs_all_keys)
    for mand_cat in mand_cats:
        cat_id = filter(lambda a:a[1]==mand_cat,catlist)[0][0]
        no_of_items = len(filter(lambda a:self[a].get(self.cat_spec)==cat_id,
                             all_keys))
        if no_of_items == 0:
            return {"result":False,"bad_items":cat_id}
    return {"result":True}

Given a block containing save frames, this routine compiles a list of categories which appear in any of the save frames. It then returns a list of mandatory categories which are not present in the save frames or enclosing block.

This is for efficiency; as the validation routines are constructed on a per-block basis, and we try to treat save frames as block equivalents, we don't want to repeat checks more than we have to.

As this is run outside the normal procedure, which is block based, we return the information in the same way that the run_data_checks routine would format it.

<Fake mandatory category information>= (<-U)
def find_prob_cats(self,whole_block):
    mand_cats = filter(lambda a:self[a].get("_category.mandatory_code","no")=="yes",
                self.keys())
    # map to actual ids
    catlist = self.cat_map.items()
    # find missing categories
    wbs = whole_block["saves"]
    abs_all_keys = whole_block.keys()
    abs_all_keys.extend(reduce(lambda a,b:a+(wbs[b].keys()),wbs.keys(),[]))
    prob_cats = []
    for mand_cat in mand_cats:
        cat_id = filter(lambda a:a[1]==mand_cat,catlist)[0][0]
        
        if len(filter(lambda a:self[a].get(self.cat_spec)==cat_id,abs_all_keys))==0:
            prob_cats.append(cat_id)
    if len(prob_cats) > 0:
        return (False,{'whole_block':[('validate_mandatory_category',{"result":False,"bad_items":prob_cats})]})
    else:
        return (True,{})

Preparing our type expressions

In DDL2 dictionaries our type expressions are given in the main block as POSIX regexps, so we can pass them on to the re package. For DDL1 dictionaries we could get them from the DDL1 language definition, but for now we just hard code them. Essentially only the number definition is important, as the syntax check during reading/writing will catch any char violations.

Note that the python re engine is not POSIX compliant in that it will not return the longest leftmost match, but rather the first leftmost match. John Bollinger suggested an obvious fix: we append a $ to force a full match.

In other regexp editing, the \{ sequence inside the character sets of some of the regexps is actually interpreted as an escaped bracket, so the backslash vanishes. We add it back in by doing a very hackish and ugly substitution which substitutes these two characters anywhere that they occur inside square brackets. A final change is to insert a \r wherever we find a \n - it seems that this has been left out. After these changes, and appending on default expressions as well, we can now work with DDL2 expressions directly.

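The leftmost-match behaviour is easily demonstrated (a minimal illustration, not part of the tangled source):

import re
print re.match(r"ab|abc","abc").group()      # -> 'ab': first, not longest, alternative
print re.match(r"(ab|abc)$","abc").group()   # -> 'abc': the $ forces a full match
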
We keep the primitive code for the single reason that we need to know when we are dealing with a number that has an esd appended, and this is flagged by the primitive code being of type 'numb'.

<Add type information>= (<-U)
def add_type_info(self):
    if self.dic_as_cif[self.master_key].has_key("_item_type_list.construct"): 
        types = self.dic_as_cif[self.master_key]["_item_type_list.code"]
        prim_types = self.dic_as_cif[self.master_key]["_item_type_list.primitive_code"]
        constructs = map(lambda a: a + "$", self.dic_as_cif[self.master_key]["_item_type_list.construct"])
        # add in \r wherever we see \n, and change \{ to \\{
        def regex_fiddle(mm_regex):
            brack_match = r"((.*\[.+)(\\{)(.*\].*))" 
            ret_match = r"((.*\[.+)(\\n)(.*\].*))" 
            fixed_regexp = mm_regex[:]  #copy
            # fix the brackets
            bm = re.match(brack_match,mm_regex)
            if bm != None: 
                fixed_regexp = bm.expand(r"\2\\\\{\4")
            # fix missing \r
            rm = re.match(ret_match,fixed_regexp)
            if rm != None:
                fixed_regexp = rm.expand(r"\2\3\\r\4")    
            #print "Regexp %s becomes %s" % (mm_regex,fixed_regexp)
            return fixed_regexp
        constructs = map(regex_fiddle,constructs)
        packed_up = map(None,types,constructs)
        for typecode,construct in packed_up:
            self.typedic[typecode] = re.compile(construct,re.MULTILINE|re.DOTALL)
        # now make a primitive <-> type construct mapping
        packed_up = map(None,types,prim_types)
        for typecode,primtype in packed_up:
            self.primdic[typecode] = primtype

Linkage to dREL

The drel_ast_yacc package will generate an Abstract Syntax Tree, which we then convert to a Python function using py_from_ast.make_python_function. We use it during initialisation to transform all methods to python expressions, and then the derive_item method will use the resulting functions to try to derive missing values.

The make_python_function function requires dictionary information to be supplied regarding looped categories and keys.

If we were really serious about dictionary-driven software, the attribute lookups that follow would not use get(), but square brackets and allow default values to be returned. However, that would require assigning a dictionary to the dictionary and consequent automated searches which I can't be bothered to do at this stage. Just be aware that the default value in the get() statement is the _enumeration.default specified in ddl.dic...

<Transform drel to python>= (<-U)
def transform_drel(self):
    from drel import drel_ast_yacc
    from drel import py_from_ast
    import traceback
    parser = drel_ast_yacc.parser
    my_namespace = self.keys()
    my_namespace = dict(map(None,my_namespace,my_namespace)) 
    # we provide a table of loopable categories {cat_name:(key,[item_name,...]),...})
    loopable_cats = [a for a in self.keys() if self[a].get("_definition.class","Set")=="Loop"]
    loop_keys = [self[a]["_category.key_id"].split(".")[1] for a in loopable_cats]
    cat_names = [self.names_in_cat(a,names_only=True) for a in loopable_cats]
    loop_info = dict(zip(loopable_cats,zip(loop_keys,cat_names)))
    # parser.listable_items = [a for a in self.keys() if "*" in self[a].get("_type.dimension","")] 
    derivable_list = [a for a in self.keys() if self[a].has_key("_method.expression") \
                          and self[a].get("_definition.scope","")!='Category' \
                          and self[a].get("_name.category_id","")!= "function"]
    for derivable in derivable_list:
        target_id = derivable
        # reset the list of visible names for parser
        special_ids = [dict(map(None,self.keys(),self.keys()))]
        print "Target id: %s" % derivable
        drel_expr = self[derivable]["_method.expression"]
        if isinstance(drel_expr,list):
           try:
                drel_expr = self[derivable].GetKeyedPacket("_method.purpose","Evaluation")
                drel_expr = getattr(drel_expr,'_method.expression')
           except (KeyError,ValueError):
                print 'No evaluation method found, skipping' 
                continue
        # print "Transforming %s" % drel_expr
        # List categories are treated differently...
        try:
            meth_ast = parser.parse(drel_expr+"\n")
        except:
            print 'Syntax error in method for %s; leaving as is' % derivable
            a,b = sys.exc_info()[:2]
            print `a`,`b`
            print traceback.print_tb(sys.exc_info()[-1],None,sys.stdout)
            continue
        # Construct the python method
        pyth_meth = py_from_ast.make_python_function(meth_ast,"pyfunc",target_id,
                                                                   loopable=loop_info,
                                                     cif_dic = self)
        save_overwrite = self[derivable].overwrite
        self[derivable].overwrite = True
        self[derivable]["_method.py_expression"] = pyth_meth
        self[derivable].overwrite = save_overwrite
        #print "Final result:\n " + self[derivable]["_method.py_expression"]

Drel functions are all stored in category 'functions' in our final dictionary. We want to convert them to executable python code and store them in an appropriate namespace which we can then pass to our individual item methods.

<Store dREL functions>= (<-U)
def add_drel_funcs(self):
    from drel import drel_ast_yacc
    from drel import py_from_ast
    funclist = [a for a in self.keys() if self[a].get("_name.category_id","")=='function']
    funcnames = [(self[a]["_name.object_id"],
                  getattr(self[a].GetKeyedPacket("_method.purpose","Evaluation"),"_method.expression")) for a in funclist]
    # create executable python code...
    parser = drel_ast_yacc.parser
    # we provide a table of loopable categories {cat_name:(key,[item_name,...]),...})
    loopable_cats = [a for a in self.keys() if self[a].get("_definition.class","Set")=="Loop"]
    loop_keys = [self[a]["_category.key_id"].split(".")[1] for a in loopable_cats]
    cat_names = [self.names_in_cat(a,names_only=True) for a in loopable_cats]
    loop_info = dict(zip(loopable_cats,zip(loop_keys,cat_names)))
    for funcname,funcbody in funcnames:
        parser.target_id = funcname
        res_ast = parser.parse(funcbody)
        py_function = py_from_ast.make_python_function(res_ast,None,targetname=funcname,func_def=True,loopable=loop_info,cif_dic = self)
        #print 'dREL library function ->\n' + py_function
        global_table = globals()
        # global_table.update(self.ddlm_functions)
        exec py_function in global_table    #add to namespace
    #print "All functions -> " + `self.ddlm_functions`

When a dictionary is available during CIF file access, we can resolve a missing dataname in three ways: (1) check if it is defined under an alias; (2) use a dREL method to calculate the value; (3) use default values if defined. We resolve in this priority. Note that we also convert to the appropriate type.

The store_value flag asks us to update the ciffile object with the new value. We remove any numpy dependencies before doing this, which means that we must recreate the numpy type when returning it.

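A hedged usage sketch (the data file, block name and dataname are hypothetical):

cf = CifFile('data.cif')
vol = cd.derive_item('_cell.volume',cf['testblock'],store_value=True)
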
<Derive item information>= (<-U)
def derive_item(self,start_key,cifdata,store_value = False):
    key = start_key   #starting value
    result = None     #success is a non-None value
    <Resolve using aliases>
    # store any default value in case we have a problem
    def_val = self[key].get("_enumeration.default","")
    def_index_val = self[key].get("_enumeration.def_index_id","")
    the_func = self[key].get('_method.py_expression',"")
    if the_func:   #attempt to calculate it
        <Execute pythonised dREL method>
    if result is None:   # try defaults
        <Work out default value of dataname>
    # read it in
    if result is None:   #can't do anything else
        print 'Warning: no way of deriving item %s' % key
    # now try to insert the new information into the right place
    # find if items of this category already appear...
    if store_value: 
          # try to change any matrices etc. to lists
          the_category = self[key]["_name.category_id"]
          if self[the_category].get('_definition.class','Set')=='Loop':
              # our result is looped, this only gets the simple structures
              if result is not None and hasattr(result[0],'dtype'):    #numpy object
                  if result[0].size > 1:   #so is not a float
                      out_result = [StarFile.StarList(a.tolist()) for a in result]
                  else:
                      def conv_from_numpy(maybe_numpy):
                          try:
                              return maybe_numpy.item(0)
                          except:
                              return maybe_numpy
                      out_result = [conv_from_numpy(a) for a in result]
              else:
                  out_result = result
              # so out_result now contains a value suitable for storage
              cat_names = [a for a in self.keys() if self[a].get("_name.category_id",None)==the_category]
              has_cat_names = [a for a in cat_names if cifdata.has_key(a)]
              print 'Found pre-existing names: ' + `has_cat_names`
              if len(has_cat_names)>0:   #this category already exists
                  if out_result is None: #need to create a list of Nones
                      loop_len = len(cifdata[has_cat_names[0]])
                      out_result = [None]*loop_len
                      result = out_result
              cifdata[key] = out_result      #lengths must match or else!!
              print 'Loop info:' + `cifdata.loops`
              if len(has_cat_names)>0:
                  cifdata.AddLoopName(has_cat_names[0],key)
          else:    #not a looped category 
              if result is not None and hasattr(result,'dtype'):
                  if result.size > 1:
                      out_result = StarFile.StarList(result.tolist())
                      cifdata[key] = out_result
                  else:
                      cifdata[key] = result.item(0)
              else:
                  cifdata[key] = result
    return result

Executing a dREL method. The execution defines a function, 'pyfunc' which is then itself executed in global scope.

<Execute pythonised dREL method>= (<-U)
#global_table = globals()
#global_table.update(self.ddlm_functions)
print 'Executing function for %s:' % key
#print the_func
exec the_func in globals(),locals() #will access dREL functions, puts "pyfunc" in scope
# print 'in following global environment: ' + `global_table`
stored_setting = cifdata.provide_value
cifdata.provide_value = True
result = pyfunc(cifdata)
cifdata.provide_value = stored_setting
#print "Function returned %s" % `result`

Aliases. If we have this item under a different name, find it and return it immediately after putting it into the correct type. We could be passed either the dictionary-defined dataname, or any of its previous names. We have stored our aliases as a table indexed by dictionary-defined dataname, in order to potentially translate from old to new datanames (not yet implemented). Once we find a dataname that is present in the datafile, we return it. Note that we have two types of check: in one we are given an old-style dataname and have to find the new or other old version (in which case we have to check the key of the table), and in the other we are given the latest version of the dataname and have to check for older names in the datafile - this latter is the dREL situation, so we have optimised for it by checking that first and making the modern datanames the table keys.

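The table might therefore look like the following hedged sketch, with modern dictionary-defined datanames as keys and older forms as values (the entries are hypothetical):

self.alias_table = {'_cell.volume':['_cell_volume'],
                    '_atom_site.label':['_atom_site_label']}
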
<Resolve using aliases>= (<-U)
# check for aliases
# check for an older form of a new value
found_it = [k for k in self.alias_table.get(key,[]) if cifdata.has_key(k)]
if len(found_it)>0:
    corrected_type = self.change_type(key,cifdata[found_it[0]])
    return corrected_type
# now do the reverse check - any alternative form
alias_name = [a for a in self.alias_table.items() if key in a[1]]
print 'Aliases for %s: %s' % (key,`alias_name`)
if len(alias_name)==1:
    key = alias_name[0][0]   #actual definition name
    if cifdata.has_key(key): return self.change_type(key,cifdata[key])
    found_it = [k for k in alias_name[0][1] if cifdata.has_key(k)]
    if len(found_it)>0:
        return self.change_type(key,cifdata[found_it[0]])
elif len(alias_name)>1:
    raise CifError, 'Dictionary error: dataname alias appears in different definitions: ' + `alias_name`

Using the defaults system. We also check out any default values which we could return in case of error. Note that ddlm adds the '_enumeration.def_index_id' attribute as an alternative way to derive a value from a table. During development, we deliberately allow errors arising from the method to be propagated so that we can see anything that might be wrong.

<Work out default value of dataname>= (<-U)
if def_val: return self.change_type(key,def_val)
if def_index_val:            #derive a default value
    index_vals = self[key]["_enumeration_default.index"]
    val_to_index = cifdata[def_index_val]     #what we are keying on
    lcase_comp = False
    if self[def_index_val]['_type.contents'] in ['Code','Name','Tag']:
        lcase_comp = True
        index_vals = [a.lower() for a in index_vals]
    # Handle loops
    if isinstance(val_to_index,list):
        if lcase_comp:
            val_to_index = [a.lower() for a in val_to_index]
        keypos = [index_vals.index(a) for a in val_to_index]
        result = map(lambda a:self[key]["_enumeration_default.value"][a] ,keypos)
    else:
        if lcase_comp:
            val_to_index = val_to_index.lower()
        keypos = index_vals.index(val_to_index)   #value error if no such value available
        result = self[key]["_enumeration_default.value"][keypos]
    result = self.change_type(key,result)
    print "Indexed on %s to get %s for %s" % (def_index_val,`result`,`val_to_index`)

In the single case of executing dREL methods, we wish to return numpy arrays from our __getitem__ so that mathematical operations proceed as expected for matrix etc. objects. This needs to be reimplemented: currently numpy must be installed for 'numerification' to work.

<Switch on numpy arrays>= (<-U)
def switch_numpy(self,to_val):
    pass

This function converts the string-valued items returned from the parser into types that correspond to the dictionary specifications. For DDLm it must also deal with potentially complex structures containing both strings and numbers. We have tried to avoid introducing a dependence on numpy in general for PyCIFRW, but once we get into the realm of DDLm we require numpy arrays in order to handle the various processing tasks. This routine is the one that will create the arrays from the StarList types, so it needs access to numpy. However, this routine is only called if a DDLm dictionary has been provided, so we should still have no numpy dependence for non-DDLm cases.

For safety, we check that our object is really string-valued. In practice, this means that it is either a string, a list of strings, or a list of StarLists as these are the only datastructures that an as-parsed file will contain.

<Convert string to appropriate type>= (<-U)
def change_type(self,itemname,inval):
    import numpy
    if inval == "?": return inval
    change_function = convert_type(self[itemname])
    if isinstance(inval,list) and not isinstance(inval,StarFile.StarList):   #from a loop
        newval = map(change_function,inval)
    else: 
        newval = change_function(inval)
    return newval

We may be passed float values which have esds appended. We catch this case by searching for an opening round bracket

<Convert value to float, ignore esd>= (U->)
def float_with_esd(inval):
    if isinstance(inval,basestring):
        j = inval.find("(")
        if j>=0:  return float(inval[:j])
    return float(inval)

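For example:

    print float_with_esd("3.456(7)")    # 3.456
    print float_with_esd("12.5")        # 12.5
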
This function analyses a DDL1-type range expression, returning a maximum and minimum value. If the number format ever changes, this regular expression must be updated to match.

<Analyse range>= (<-U)
def getmaxmin(self,rangeexp):
    regexp = '(-?(([0-9]*[.]([0-9]+))|([0-9]+)[.]?)([eEdD][+-]?[0-9]+)?)*' 
    regexp = regexp + ":" + regexp
    regexp = re.match(regexp,rangeexp)
    try:
        minimum = regexp.group(1)
        maximum = regexp.group(7)
    except AttributeError:       #no match at all
        print "Can't match %s" % rangeexp
        minimum = maximum = None
    if minimum is None: minimum = "." 
    else: minimum = float(minimum)
    if maximum is None: maximum = "." 
    else: maximum = float(maximum)
    return maximum,minimum
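
For example, assuming this chunk has been included in the CifDic class (as the chunk references indicate) and that cifdic is an initialised CifDic object:

    print cifdic.getmaxmin("0.0:90.0")    # (90.0, 0.0)
    print cifdic.getmaxmin("0:")          # ('.', 0.0) - no upper bound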

Valid CIFs

A whole new can of worms is opened up when we require that a CIF is not only syntactically correct, but valid according to the specified dictionary.

A valid CIF is essentially a collection of valid CIF blocks. It may be the case in the future that inter-block relationships need to be checked, so we define a separate ValidCifFile class.

<A valid CIF block>= (<-U)
class ValidCifBlock(CifBlock):
    """A `CifBlock` that is valid with respect to a given CIF dictionary.  Methods
    of `CifBlock` are overridden where necessary to disallow addition of invalid items to the
    `CifBlock`.

    ## Initialisation
 
    * `dic` is a `CifDic` object to be used for validation.

    """
    <Initialise with dictionary>
    <Run data checks>
    <Check input data>
    <Redefine item adding and removing>
    <Validation report>

The dic argument contains a previously initialised dictionary. We can alternatively provide a list of filenames/CifFiles which are merged according to mergemode. If both are provided, the diclist argument is ignored with a warning.

<Initialise with dictionary>= (<-U)
def __init__(self,dic = None, diclist=[], mergemode = "replace",*args,**kwords):
    CifBlock.__init__(self,*args,**kwords)    
    if dic and diclist:
        print "Warning: diclist argument ignored when initialising ValidCifBlock"
    if dic is not None:
        if not isinstance(dic,CifDic):
            raise TypeError( "ValidCifBlock passed non-CifDic type in dic argument")
        self.fulldic = dic
    elif len(diclist)==0:
        raise ValidCifError( "At least one dictionary must be specified")
    else:
        self.fulldic = merge_dic(diclist,mergemode)
    if not self.run_data_checks()[0]:
        raise ValidCifError( self.report())
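
A minimal usage sketch (the filenames and block name are hypothetical):

    cd = CifDic("cif_core.dic")
    vb = ValidCifBlock(dic=cd, data=CifFile("mydata.cif")["some_block"])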

Run all of these data checks. The dictionary validation methods return a list of tuples (validation function name, result) for each item. When checking a full data block, we can make use of the optimisation facilities provided in the CifDic object.

<Run data checks>= (<-U)
def run_data_checks(self,verbose=False):
    self.v_result = {}
    self.fulldic.optimize_on()
    for dataname in self.keys():
        update_value(self.v_result,self.fulldic.run_item_validation(dataname,self[dataname]))
        update_value(self.v_result,self.fulldic.run_global_validation(dataname,self[dataname],self))
    for loop_names in self.loops.values():
        update_value(self.v_result,self.fulldic.run_loop_validation(loop_names))
    # now run block-level checks
    update_value(self.v_result,self.fulldic.run_block_validation(self))
    # return false and list of baddies if anything didn't match
    self.fulldic.optimize_off()
    for test_key in self.v_result.keys():
        #print "%s: %s" % (test_key,`self.v_result[test_key]`)
        self.v_result[test_key] = filter(lambda a:a[1]["result"]==False,self.v_result[test_key])
        if len(self.v_result[test_key]) == 0: 
            del self.v_result[test_key]
    isvalid = len(self.v_result)==0
    #if not isvalid:
    #    print "Baddies:" + `self.v_result`
    return isvalid,self.v_result

Report back. We summarise the contents of v_result, writing out the name of each failed test for every offending item.

<Validation report>= (<-U)
def report(self):
   import cStringIO
   outstr = cStringIO.StringIO()
   outstr.write( "Validation results\n")
   outstr.write( "------------------\n")
   outstr.write( "%d invalid items found\n" % len(self.v_result))
   for item_name,val_func_list in self.v_result.items():
       outstr.write("%s fails following tests:\n" % item_name)
       for val_func in val_func_list:
           outstr.write("\t%s\n" % val_func[0])
   return outstr.getvalue()

It is not a mistake for a data name to be absent from any of the specified dictionaries, so we have to check that we have a match before running any data checks, rather than simply raising an error immediately.

<Check input data>= (<-U)
def single_item_check(self,item_name,item_value):
    #self.match_single_item(item_name)
    if not self.fulldic.has_key(item_name):
        result = {item_name:[]}
    else:
        result = self.fulldic.run_item_validation(item_name,item_value)
    baddies = filter(lambda a:a[1]["result"]==False, result[item_name])
    # if even one false one is found, this should trigger
    isvalid = (len(baddies) == 0)
    # if not isvalid: print "Failures for %s:" % item_name + `baddies`
    return isvalid,baddies

def loop_item_check(self,loop_names):
    in_dic_names = filter(lambda a:self.fulldic.has_key(a),loop_names)
    if len(in_dic_names)==0:      #nothing in the dictionary to check against
        return True,[]
    result = self.fulldic.run_loop_validation(in_dic_names)
    baddies = filter(lambda a:a[1]["result"]==False,result[in_dic_names[0]])
    # if even one false one is found, this should trigger
    isvalid = (len(baddies) == 0)
    # if not isvalid: print "Failures for %s:" % `loop_names` + `baddies`
    return isvalid,baddies

def global_item_check(self,item_name,item_value,provisional_items={}):
    if not self.fulldic.has_key(item_name):
        result = {item_name:[]}
    else:
        result = self.fulldic.run_global_validation(item_name,
           item_value,self,provisional_items = provisional_items)
    baddies = filter(lambda a:a[1]["result"]==False,result[item_name])
    # if even one false one is found, this should trigger
    isvalid = (len(baddies) == 0)
    # if not isvalid: print "Failures for %s:" % item_name + `baddies`
    return isvalid,baddies

def remove_global_item_check(self,item_name):
    if not self.fulldic.has_key(item_name):
        result = {item_name:[]}
    else:
        result = self.fulldic.run_remove_global_validation(item_name,self,False)
    baddies = filter(lambda a:a[1]["result"]==False,result[item_name])
    # if even one false one is found, this should trigger
    isvalid = (len(baddies) == 0)
    # if not isvalid: print "Failures for %s:" % item_name + `baddies`
    return isvalid,baddies

We need to override the base class methods here to prevent addition of an item that would render an object invalid.

<Redefine item adding and removing>= (<-U)
<Add to looped data with validity checks> 
<Add straight data>

<Add straight data>= (<-U)
def AddCifItem(self,data):
    if isinstance(data[0],basestring):   # single item
        valid,problems = self.single_item_check(data[0],data[1])
        self.report_if_invalid(valid,problems,data[0])
        valid,problems = self.global_item_check(data[0],data[1])
        self.report_if_invalid(valid,problems,data[0])
    elif isinstance(data[0],tuple) or isinstance(data[0],list):
        paired_data = map(None,data[0],data[1])
        for name,value in paired_data:
            valid,problems = self.single_item_check(name,value) 
            self.report_if_invalid(valid,problems,name)
        valid,problems = self.loop_item_check(data[0])
        self.report_if_invalid(valid,problems,data[0])
        prov_dict = {}            # for storing temporary items
        for name,value in paired_data: prov_dict[name]=value
        for name,value in paired_data: 
            del prov_dict[name]   # remove temporarily
            valid,problems = self.global_item_check(name,value,prov_dict)
            prov_dict[name] = value  # add back in
            self.report_if_invalid(valid,problems,name)
    super(ValidCifBlock,self).AddCifItem(data)

def AddItem(self,key,value,**kwargs):
    """Set value of dataname `key` to `value` after checking for conformance with CIF dictionary"""
    valid,problems = self.single_item_check(key,value)
    self.report_if_invalid(valid,problems,key)
    valid,problems = self.global_item_check(key,value)
    self.report_if_invalid(valid,problems,key)
    super(ValidCifBlock,self).AddItem(key,value,**kwargs)

# utility function
def report_if_invalid(self,valid,bad_list,data_name=""):
    if not valid:
        failed_tests = ",".join([b[0] for b in bad_list])
        error_string = `data_name` + " fails following validity checks: " + failed_tests
        raise ValidCifError( error_string)

def __delitem__(self,key):
    # we don't need to run single item checks; we do need to run loop and
    # global checks.
    if self.has_key(key):
        try: 
            loop_items = self.GetLoop(key)
        except TypeError:
            loop_items = []
        if loop_items:             #need to check loop conformance
            loop_names = map(lambda a:a[0],loop_items)
            loop_names = filter(lambda a:a != key,loop_names)
            valid,problems = self.loop_item_check(loop_names)
            self.report_if_invalid(valid,problems,key)
        valid,problems = self.remove_global_item_check(key)
        self.report_if_invalid(valid,problems,key)
    self.RemoveCifItem(key)
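
Continuing the sketch above (vb as created earlier, dataname and value hypothetical), an invalid addition raises ValidCifError instead of silently corrupting the block:

    vb.AddItem("_cell_length_a","5.959(1)")    # checked against the dictionary
    del vb["_cell_length_a"]                   # loop and global checks run on removal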

Adding to a loop. We find the loop containing the dataname that we have been passed, and then append all of the (key,value) pairs that we are passed in data, which is a dictionary. We expect that the data have been sorted out for us, unlike in AddCifItem, where both unlooped and looped data can arrive in a single set. The dataname passed to this routine is simply a convenient way to refer to the loop, and has no other significance.

<Add to looped data with validity checks>= (<-U)
def AddToLoop(self,dataname,loopdata):
    # single item checks
    paired_data = loopdata.items()
    for name,value in paired_data:
        valid,problems = self.single_item_check(name,value) 
        self.report_if_invalid(valid,problems,name)
    # loop item checks; merge with current loop
    for aloop in self.block["loops"]:
        if aloop.has_key(dataname):
            loopnames = aloop.keys()
            for new_name in loopdata.keys():
                if new_name not in loopnames: loopnames.append(new_name)
            valid,problems = self.loop_item_check(loopnames)
            self.report_if_invalid(valid,problems,dataname)
    prov_dict = loopdata.copy()
    for name,value in paired_data: 
        del prov_dict[name]   # remove temporarily
        valid,problems = self.global_item_check(name,value,prov_dict)
        prov_dict[name] = value  # add back in
        self.report_if_invalid(valid,problems,name)
    CifBlock.AddToLoop(self,dataname,loopdata)
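
A usage sketch with hypothetical datanames, extending the loop that contains _atom_site_label:

    vb.AddToLoop("_atom_site_label",{"_atom_site_occupancy":[1.0,0.5]})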

Note that a dictionary must be specified in order to create a valid CIF file. This dictionary is then passed to any blocks; if they were already ValidCifBlocks, they will be reinitialised. Note that, as reading a dictionary takes time, we do it immediately to save doing it later.

As a convenience, we handle lists of filenames/CifFiles which are supposed to be dictionaries, and pass them directly to the ValidCifBlock object which will merge as necessary.

Note that we have to set bigdic before calling __init__. The various calls down through the inheritance hierarchy end up calling ValidCifBlock with self.bigdic as one of the arguments. Also, this __init__ procedure could be called from within StarFile.__init__ if given a filename to read from, so we allow that bigdic might already have been set - and check for its existence before setting it again!

<A valid CIF file>= (<-U)
class ValidCifFile(CifFile):
    """A CIF file for which all datablocks are valid.  Argument `dic` to
    initialisation specifies a `CifDic` object to use for validation."""
    <Initialise valid CIF>
    <Redefine add new block>

<Initialise valid CIF>= (<-U)
def __init__(self,dic=None,diclist=[],mergemode="replace",*args,**kwargs):
    if not diclist and not dic and not hasattr(self,'bigdic'):
        raise ValidCifError( "At least one dictionary is required to create a ValidCifFile object")
    if dic and diclist:
        print "Warning: diclist argument ignored when initialising ValidCifFile"
    if dic:
        self.bigdic = dic
    elif diclist:
        self.bigdic = merge_dic(diclist,mergemode)
    CifFile.__init__(self,*args,**kwargs)
    for blockname in self.keys():
        self.dictionary[blockname]=ValidCifBlock(data=self.dictionary[blockname],dic=self.bigdic)

Whenever a new block is added, we have to additionally update our match array and perform a validation run. This definition overrides the definition in the parent class.

<Redefine add new block>= (<-U)
def NewBlock(self,blockname,blockcontents,**kwargs):
    CifFile.NewBlock(self,blockname,blockcontents,**kwargs)
    # dictionary[blockname] is now a CifBlock object.  We
    # turn it into a ValidCifBlock object
    self.dictionary[blockname] = ValidCifBlock(dic=self.bigdic,
                                     data=self.dictionary[blockname])

We provide some functions for straight validation. These serve as an example of the use of the CifDic class with the CifFile class.

<Top-level functions>= (<-U)
<ValidationResult class>
<Validate against the given dictionaries>
<Run dictionary validation checks>

A convenient wrapper class for dealing with the structure returned by validation. Perhaps a more elegant approach would be to return one of these objects from validation rather than wrap the validation routines inside.

<ValidationResult class>= (<-U)
class ValidationResult:
    """Represents a validation result.  It is initialised with the return
    value of the `Validate` function."""
    def __init__(self,results):
        """results is the (valid_result,no_matches) tuple returned by the validate function"""
        self.valid_result, self.no_matches = results

    def report(self,use_html):
        """Return string with human-readable description of validation result"""
        return validate_report((self.valid_result, self.no_matches),use_html)

    def is_valid(self,block_name=None):
        """Return True for valid CIF file, otherwise False"""
        if block_name is not None:
            block_names = [block_name]
        else:
            block_names = self.valid_result.iterkeys()
        valid = True
        for block_name in block_names:
            if not self.valid_result[block_name] == (True,{}):
                valid = False
                break
        return valid

    def has_no_match_items(self,block_name=None):
        """Return True if some items are not found in the dictionary"""
        if block_name is not None:
            block_names = [block_name]
        else:
            block_names = self.no_matches.iterkeys()
        has_no_match_items = False
        for block_name in block_names:
            if self.no_matches[block_name]:
                has_no_match_items = True
                break
        return has_no_match_items
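
A usage sketch, employing the Validate function defined below (the filenames are hypothetical):

    cd = CifDic("cif_core.dic")
    val_report = ValidationResult(Validate("mydata.cif",dic=cd))
    print val_report.report(use_html=False)
    print val_report.is_valid()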

We provide a function to do straight validation, using the built-in methods of the dictionary type. We need to create a single dictionary from the multiple dictionaries we are passed, before doing our check. Also, we allow validation of dictionaries themselves, by passing a special flag isdic. This should only be used for DDL2 dictionaries. DDL1 dictionaries validate OK if (any) global block is deleted.

<Validate against the given dictionaries>= (<-U)
def Validate(ciffile,dic = "", diclist=[],mergemode="replace",isdic=False,fake_mand=True):
    """Validate the `ciffile` conforms to the definitions in `CifDic` object `dic`, or if `dic` is missing,
    to the results of merging the `CifDic` objects in `diclist` according to `mergemode`.  Flag
    `isdic` indicates that `ciffile` is a CIF dictionary meaning that save frames should be
    accessed for validation and that mandatory_category should be interpreted differently for DDL2."""
    check_file = CifFile(ciffile)
    if not dic:
        fulldic = merge_dic(diclist,mergemode)
    else:
        fulldic = dic
    no_matches = {}
    valid_result = {}
    if isdic:          #assume one block only
        blockname = check_file.keys()[0]
        check_bc = check_file[blockname]["saves"]
        check_globals = check_file[blockname] 
        # collect a list of parents for speed
        poss_parents = fulldic.get_all("_item_linked.parent_name")
        for parent in poss_parents:
            curr_parent = listify(check_globals.get(parent,[]))
            new_vals = check_bc.get_all(parent)
            new_vals.extend(curr_parent)
            if len(new_vals)>0:
                check_globals[parent] = new_vals
                # print "Added %s (len %d)" % (parent,len(check_globals[parent]))
        # next dictionary problem: the main DDL2 dictionary has what
        # I would characterise as a mandatory_category problem, but
        # in order to gloss over it, we allow a different 
        # interpretation, which requires only a single check for one
        # block.
        if fake_mand:
            valid_result[blockname] = fulldic.find_prob_cats(check_globals)
            no_matches[blockname] = filter(lambda a:not fulldic.has_key(a),check_globals.keys())
    else:
        check_bc = check_file
        check_globals = CifBlock()   #empty
    for block in check_bc.keys(): 
        #print "Validating block %s" % block 
        no_matches[block] = filter(lambda a:not fulldic.has_key(a),check_bc[block].keys())
        # remove non-matching items
        # print "Not matched: " + `no_matches[block]`
        for nogood in no_matches[block]:
             del check_bc[block][nogood]
        valid_result[block] = run_data_checks(check_bc[block],fulldic,globals=check_globals,fake_mand=fake_mand)
    return valid_result,no_matches

def validate_report(val_result,use_html=False):
    import cStringIO
    valid_result,no_matches = val_result
    outstr = cStringIO.StringIO()
    if use_html:
        outstr.write("<h2>Validation results</h2>")
    else:
        outstr.write( "Validation results\n")
        outstr.write( "------------------\n")
    if len(valid_result) > 10:  
        suppress_valid = True         #don't clutter with valid messages
        if use_html:
           outstr.write("<p>For brevity, valid blocks are not reported in the output.</p>")
    else:
        suppress_valid = False
    for block in valid_result.keys():
        block_result = valid_result[block]
        if block_result[0]:
            out_line = "Block '%s' is VALID" % block
        else:
            out_line = "Block '%s' is INVALID" % block
        if use_html:
            if (block_result[0] and (not suppress_valid or len(no_matches[block])>0)) or not block_result[0]:
                outstr.write( "<h3>%s</h3><p>" % out_line)
        else:
            outstr.write( "\n %s\n" % out_line)
        if len(no_matches[block])!= 0:
            if use_html:
                outstr.write( "<p>The following items were not found in the dictionary")
                outstr.write(" (note that this does not invalidate the data block):</p>")
                outstr.write("<p><table>\n")
                map(lambda it:outstr.write("<tr><td>%s</td></tr>" % it),no_matches[block])
                outstr.write("</table>\n")
            else:
                outstr.write( "\n The following items were not found in the dictionary:\n")
                outstr.write("Note that this does not invalidate the data block\n")
                map(lambda it:outstr.write("%s\n" % it),no_matches[block])
        # now organise our results by type of error, not data item...
        error_type_dic = {}
        for error_item, error_list in block_result[1].items():
            for func_name,bad_result in error_list:
                bad_result.update({"item_name":error_item})
                try:
                    error_type_dic[func_name].append(bad_result)
                except KeyError:
                    error_type_dic[func_name] = [bad_result]
        # make a table of test name, test message
        info_table = {\
        'validate_item_type':\
            "The following data items had badly formed values",
        'validate_item_esd':\
            "The following data items should not have esds appended",
        'validate_enum_range':\
            "The following data items have values outside permitted range",
        'validate_item_enum':\
            "The following data items have values outside permitted set",
        'validate_looping':\
            "The following data items violate looping constraints",
        'validate_loop_membership':\
            "The following looped data names are of different categories to the first looped data name",
        'validate_loop_key':\
            "A required dataname for this category is missing from the loop\n containing the dataname",
        'validate_loop_references':\
            "A dataname required by the item is missing from the loop",
        'validate_parent':\
            "A parent dataname is missing or contains different values",
        'validate_child':\
            "A child dataname contains different values to the parent",
        'validate_uniqueness':\
            "One or more data items do not take unique values",
        'validate_dependents':\
            "A dataname required by the item is missing from the data block",
        'validate_exclusion': \
            "Both dataname and exclusive alternates or aliases are present in data block",
        'validate_mandatory_category':\
            "A required category is missing from this block"}

        for test_name,test_results in error_type_dic.items():
           if use_html:
               outstr.write(html_error_report(test_name,info_table[test_name],test_results)) 
           else:
               outstr.write(error_report(test_name,info_table[test_name],test_results)) 
               outstr.write("\n\n")
    return outstr.getvalue()
         
# A function to lay out a single error report.  We are passed
# the name of the error (one of our validation functions), the
# explanation to print out, and a dictionary with the error 
# information.  We print no more than 50 characters of the item

def error_report(error_name,error_explanation,error_dics):
   retstring = "\n\n " + error_explanation + ":\n\n"
   headstring = "%-32s" % "Item name"
   bodystring = ""
   if error_dics[0].has_key("bad_values"):
      headstring += "%-20s" % "Bad value(s)"
   if error_dics[0].has_key("bad_items"):
      headstring += "%-20s" % "Bad dataname(s)"
   if error_dics[0].has_key("child"):
      headstring += "%-20s" % "Child"
   if error_dics[0].has_key("parent"):
      headstring += "%-20s" % "Parent" 
   headstring +="\n"
   for error in error_dics:
      bodystring += "\n%-32s" % error["item_name"]
      if error.has_key("bad_values"):
          out_vals = map(lambda a:a[:50],error["bad_values"])
          bodystring += "%-20s" % out_vals 
      if error.has_key("bad_items"):
          bodystring += "%-20s" % error["bad_items"]
      if error.has_key("child"):
          bodystring += "%-20s" % error["child"]
      if error.has_key("parent"):
          bodystring += "%-20s" % error["parent"]
   return retstring + headstring + bodystring 

#  This lays out an HTML error report

def html_error_report(error_name,error_explanation,error_dics,annotate=[]):
   retstring = "<h4>" + error_explanation + ":</h4>"
   retstring = retstring + "<table cellpadding=5><tr>"
   headstring = "<th>Item name</th>"
   bodystring = ""
   if error_dics[0].has_key("bad_values"):
      headstring += "<th>Bad value(s)</th>"
   if error_dics[0].has_key("bad_items"):
      headstring += "<th>Bad dataname(s)</th>"
   if error_dics[0].has_key("child"):
      headstring += "<th>Child</th>"
   if error_dics[0].has_key("parent"):
      headstring += "<th>Parent</th>" 
   headstring +="</tr>\n"
   for error in error_dics:
      bodystring += "<tr><td><tt>%s</tt></td>" % error["item_name"]
      if error.has_key("bad_values"):
          bodystring += "<td>%s</td>" % error["bad_values"]
      if error.has_key("bad_items"):
          bodystring += "<td><tt>%s</tt></td>" % error["bad_items"]
      if error.has_key("child"):
          bodystring += "<td><tt>%s</tt></td>" % error["child"]
      if error.has_key("parent"):
          bodystring += "<td><tt>%s</tt></td>" % error["parent"]
      bodystring += "</tr>\n"
   return retstring + headstring + bodystring + "</table>\n"

This function executes validation checks provided in the CifDic. The validation calls create a dictionary containing the test results for each item name. Each item has a list of (test name,result) tuples. After running the tests, we contract these lists to contain only false results, and then remove all items containing no false results.

<Run dictionary validation checks>= (<-U)
def run_data_checks(check_block,fulldic,globals={},fake_mand=False):
    v_result = {}
    for key in check_block.keys():
        update_value(v_result, fulldic.run_item_validation(key,check_block[key]))
        update_value(v_result, fulldic.run_global_validation(key,check_block[key],check_block,globals=globals))
    for loopnames in check_block.loops.values():
        update_value(v_result, fulldic.run_loop_validation(loopnames))
    update_value(v_result,fulldic.run_block_validation(check_block,globals=globals,fake_mand=fake_mand))
    # return false and list of baddies if anything didn't match
    for test_key in v_result.keys():
        v_result[test_key] = filter(lambda a:a[1]["result"]==False,v_result[test_key])
        if len(v_result[test_key]) == 0: 
            del v_result[test_key]
    # if even one false one is found, this should trigger
    # print "Baddies:" + `v_result`
    isvalid = len(v_result)==0
    return isvalid,v_result
    
<Utility functions>= (<-U)
<Extract number and esd>
<Convert value to float, ignore esd>
<Conversions to dictionary types>
<Append update>
<Transpose data>
<Merge dictionaries as CIFs>
<Get topmost parent>

This support function uses re capturing to work out the number's value. The re contains ten groups: group 1 is the entire numeric expression; group 2 is the signed number prior to the esd brackets and group 3 the same without its sign; group 4 is the match with a decimal point, with group 5 holding the digits after the decimal point; group 6 is the match without a decimal point. Group 7 is the esd bracket contents, group 8 is the exponent, and groups 9 and 10 match a lone question mark or full stop respectively.

The esd should be returned as an independent number. We count the number of digits after the decimal point, create the esd in terms of this, and then, if necessary, apply the exponent.

<Extract number and esd>= (<-U <-U)
def get_number_with_esd(numstring):
    import string
    numb_re = '((-?(([0-9]*[.]([0-9]+))|([0-9]+)[.]?))([(][0-9]+[)])?([eEdD][+-]?[0-9]+)?)|(\?)|(\.)' 
    our_match = re.match(numb_re,numstring)
    if our_match:
        a,base_num,b,c,dad,dbd,esd,exp,q,dot = our_match.groups()
    #    print "Debug: %s -> %s" % (numstring, `our_match.groups()`)
    else:
        return None,None
    if dot or q: return None,None     #a dot or question mark
    if exp:          #has exponent 
       exp = string.replace(exp,"d","e")     # mop up old fashioned numbers
       exp = string.replace(exp,"D","e")
       base_num = base_num + exp
    #print "Debug: have %s for base_num from %s" % (base_num,numstring)
    base_num = float(base_num)
    # work out esd, if present.
    if esd:
        esd = float(esd[1:-1])    # no brackets
        if dad:                   # decimal point + digits
            esd = esd * (10 ** (-1* len(dad)))
        if exp:
            esd = esd * (10 ** (float(exp[1:])))
    return base_num,esd
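
For example:

    print get_number_with_esd("4.56(3)")    # (4.56, 0.03)
    print get_number_with_esd("1.2e2")      # (120.0, None)
    print get_number_with_esd("?")          # (None, None)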

For dREL operations we require that all numerical types actually appear as numerical types rather than strings. This function takes a datablock and a dictionary and converts all the datablock contents to numerical values according to the dictionary specifications.

Note that as written we are happy to interpret a floating point string as an integer. We are therefore assuming that the value has been validated.

<Conversions to dictionary types>= (<-U)
<Overall conversion>
<Convert a single value>
<Convert a list value>
<Convert a matrix value>
<Parse the structure specification>

Instead of returning a value, we return a function that can be used to convert the values. This saves time reconstructing the conversion function for every value in a loop.

<Overall conversion>= (<-U)
def convert_type(definition):
    """Convert value to have the type given by definition"""
    #extract the actual required type information
    container = definition['_type.container']
    dimension = definition.get('_type.dimension',StarFile.StarList([]))
    structure = interpret_structure(definition['_type.contents'])
    if container == 'Single':   #a single value to convert
        return convert_single_value(structure)
    elif container == 'List':   #lots of the same value
        return convert_list_values(structure,dimension)
    elif container == 'Multiple': #no idea 
        return None
    elif container in ('Array','Matrix'): #numpy array
        return convert_matrix_values(structure)
    return lambda a:a    #unable to convert
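
For example, with a plain dictionary standing in for a real DDLm definition block, and assuming the type contents parser returns the bare type string for a simple specification:

    defn = {'_type.container':'Single','_type.contents':'Real'}
    conv = convert_type(defn)
    print conv("1.234(5)")    # 1.234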

<Convert a single value>= (<-U)
def convert_single_value(type_spec):
    """Convert a single item according to type_spec"""
    if type_spec == 'Real':
        return float_with_esd
    if type_spec in ('Count','Integer','Index','Binary','Hexadecimal','Octal'):
        return int
    if type_spec == 'Complex':
        return complex
    if type_spec == 'Imag':
        return lambda a:complex(0,float(a))   #complex() requires a numeric argument
    if type_spec in ('Code','Name','Tag'):  #case-insensitive -> lowercase
        return lambda a:a.lower()
    return lambda a:a   #can't do anything numeric
    

Convert a whole DDLm list. A 'List' type implies a repetition of the types given in the 'type.contents' entry. We get all fancy and build a function to decode each entry in our input list. This function is then mapped over the List, and in the case of looped List values, it can be mapped over the dataname value as well. However, in the case of a single repetition, files are allowed to drop one level of enclosing brackets. We account for that here by detecting a one-element list and *not* mapping the conversion function. TODO: Note that we do not yet handle the case that we are supposed to convert to a Matrix, rather than a list. TODO: handle arbitrary dimension lists, rather than special-casing the character sequence '[1]'.

<Convert a list value>= (<-U)
def convert_list_values(structure,dimension):
    """Convert the values according to the element
       structure given in [[structure]]"""
    if isinstance(structure,basestring):   #simple repetition
        func_def =  "element_convert = convert_single_value('%s')" % structure
    else:
        func_def =       "def element_convert(element):\n"
        func_def +=      "   final_val = []\n"   
        for pos_no in range(len(structure)):
            func_def +=  "   final_val.append("
            type_spec = structure[pos_no]
            if type_spec == 'Real':
                cf = "float_with_esd("
            elif type_spec in ('Count','Integer','Index','Binary','Hexadecimal','Octal'):
                cf = 'int('
            elif type_spec == 'Complex':
                cf = 'complex('
            elif type_spec == 'Imag':
                cf = 'complex(0,float('       #complex() requires a numeric argument
            elif type_spec in ('Code','Name','Tag'):
                cf = '('
            else: cf = ''
            func_def += cf
            func_def += "element[%d]" % pos_no
            if "(" in cf: func_def +=")"
            if type_spec in ('Code','Name','Tag'):
                func_def +=".lower()"
            func_def +=")\n"  # close append
        func_def +=      "   return final_val\n"
    #print func_def    #debug: the generated conversion function
    exec func_def in globals() #(re)defines element_convert in global namespace
    if len(dimension)> 0 and int(dimension[0]) != 1:
        return lambda a: map(element_convert,a)
    else: return element_convert
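
For example, for a hypothetical List type holding a (Code, Real) pair in each element:

    conv = convert_list_values(['Code','Real'],StarFile.StarList([2]))
    print conv([['A','1.5'],['B','2.5(3)']])    # [['a', 1.5], ['b', 2.5]]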

When storing a matrix/array value as a result of a calculation, we remove the numpy information and instead store as a StarList. The following routine will work transparently for either string or number-valued Star Lists, so we don't have to worry.

<Convert a matrix value>= (<-U)
def convert_matrix_values(valtype):
    """Convert a dREL String or Float valued List structure to a numpy matrix structure"""
    # first convert to numpy array, then let numpy do the work
    try: import numpy
    except ImportError:
        return lambda a:a   #cannot convert without numpy
    func_def =     "def matrix_convert(a):\n"
    func_def +=    "    import numpy\n"
    func_def +=    "    p = numpy.array(a)\n"
    if valtype == 'Real':
        func_def+= "    return p.astype('float')\n"
    elif valtype == 'Integer':
        func_def +="    return p.astype('int')\n"
    elif valtype == 'Complex':
        func_def +="    return p.astype('complex')\n"
    else:
        raise ValueError, 'Unknown matrix value type'
    exec func_def  #matrix convert is defined
    return matrix_convert    
        
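For example (numpy required):

    conv = convert_matrix_values('Real')
    print conv([['1','2'],['3','4']])    # 2x2 numpy array of floats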

DDLm specifies List element composition using a notation of form 'cont(el,el,el...)' where 'cont' refers to a container constructor (list or matrix so far) and 'el' is a simple element type. If 'cont' is missing, the sequence of elements is a sequence of elements in a simple list. We have written a simple parser to interpret this.

<Parse the structure specification>= (<-U)
def interpret_structure(struc_spec):
    """Interpret a DDLm structure specification"""
    import TypeContentsParser as t
    p = t.TypeParser(t.TypeParserScanner(struc_spec))
    return getattr(p,"input")()
    
<Append update>= (<-U)
# A utility function to append to item values rather than replace them
def update_value(base_dict,new_items):
    for new_key in new_items.keys():
        if base_dict.has_key(new_key):
            base_dict[new_key].extend(new_items[new_key])
        else:
            base_dict[new_key] = new_items[new_key]
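
For example:

    base = {'a':[1]}
    update_value(base,{'a':[2],'b':[3]})
    print base    # {'a': [1, 2], 'b': [3]}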

<Transpose data>= (<-U)
#Transpose the list of lists passed to us
def transpose(base_list):
    new_lofl = []
    full_length = len(base_list)
    opt_range = range(full_length)
    for i in range(len(base_list[0])):
       new_packet = [] 
       for j in opt_range:
          new_packet.append(base_list[j][i])
       new_lofl.append(new_packet)
    return new_lofl
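
# For example:
#     transpose([[1,2,3],[4,5,6]])  ->  [[1, 4], [2, 5], [3, 6]]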

# listify strings - used surprisingly often
def listify(item):
    if isinstance(item,basestring): return [item]
    else: return item

# given a list of search items, return a list of items 
# actually contained in the given data block
def filter_present(namelist,datablocknames):
    return filter(lambda a:a in datablocknames,namelist)

This uses the CifFile merge method to merge a list of filenames, with an initial check to determine DDL1/DDL2 merge style. In one case we merge save frames in a single block, in another case we merge data blocks. These are different levels.

Note that the data block name is passed to specify the parts of each object to be merged, rather than the objects themselves (not doing this was a bug that was caught a while ago).

<Merge dictionaries as CIFs>= (<-U)
# merge ddl dictionaries.  We should be passed filenames or CifFile
# objects
def merge_dic(diclist,mergemode="replace",ddlspec=None):
    dic_as_cif_list = []
    for dic in diclist:
        if not isinstance(dic,CifFile) and \
           not isinstance(dic,basestring):
               raise TypeError, "Require list of CifFile names/objects for dictionary merging"
        if not isinstance(dic,CifFile): dic_as_cif_list.append(CifFile(dic))
        else: dic_as_cif_list.append(dic)
    # we now merge left to right
    basedic = dic_as_cif_list[0]
    if basedic.has_key("on_this_dictionary"):   #DDL1 style only
        for dic in dic_as_cif_list[1:]:
           basedic.merge(dic,mode=mergemode,match_att=["_name"])
    elif len(basedic.keys()) == 1:                     #One block: DDL2 style
        for dic in dic_as_cif_list[1:]:
           basedic.merge(dic,mode=mergemode,
                         single_block=[basedic.keys()[0],dic.keys()[0]],
                         match_att=["_item.name"],match_function=find_parent)
    return CifDic(basedic)

Find the main item from a parent-child list. We are asked to find the topmost parent in a DDL2 definition block containing multiple _item.name entries. We use the insight that the parent item will be the one which does not also appear in the list of children. If there are no item names at all, we are dealing with something like a category definition (whether such blocks can be merged is an open question) and we return None.

<Get topmost parent>= (<-U)
def find_parent(ddl2_def):
    if not ddl2_def.has_key("_item.name"):
       return None 
    if isinstance(ddl2_def["_item.name"],basestring):
        return ddl2_def["_item.name"]
    if not ddl2_def.has_key("_item_linked.child_name"):
        raise CifError("Asked to find parent in block with no child_names")
    if not ddl2_def.has_key("_item_linked.parent_name"):
        raise CifError("Asked to find parent in block with no parent_names")
    result = filter(lambda a:a not in ddl2_def["_item_linked.child_name"],ddl2_def["_item.name"]) 
    if len(result)>1 or len(result)==0:
        raise CifError("Unable to find single unique parent data item")
    return result[0]
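
For example, with a plain dictionary standing in for a hypothetical DDL2 definition block:

    ddl2_def = {'_item.name':['_a.id','_b.a_id'],
                '_item_linked.child_name':['_b.a_id'],
                '_item_linked.parent_name':['_a.id']}
    print find_parent(ddl2_def)    # '_a.id'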

Cif Loop block class

With the removal (by PyCIFRW) of nested loops, this class is now unnecessary. It is now simply a pointer to StarFile.LoopBlock.

<CifLoopBlock class>= (<-U)
class CifLoopBlock(StarFile.LoopBlock):
    def __init__(self,data=(),**kwargs):
        super(CifLoopBlock,self).__init__(data,**kwargs)

<API documentation flags>= (<-U)
#No documentation flags