<Copyright statement>= (U->)
__copyright = """
PYCIFRW License Agreement (Python License, Version 2)
-----------------------------------------------------

1. This LICENSE AGREEMENT is between the Australian Nuclear Science
and Technology Organisation ("ANSTO"), and the Individual or
Organization ("Licensee") accessing and otherwise using this software
("PyCIFRW") in source or binary form and its associated documentation.

2. Subject to the terms and conditions of this License Agreement,
ANSTO hereby grants Licensee a nonexclusive, royalty-free, world-wide
license to reproduce, analyze, test, perform and/or display publicly,
prepare derivative works, distribute, and otherwise use PyCIFRW alone
or in any derivative version, provided, however, that this License
Agreement and ANSTO's notice of copyright, i.e., "Copyright (c)
2001-2014 ANSTO; All Rights Reserved" are retained in PyCIFRW alone or
in any derivative version prepared by Licensee.

3. In the event Licensee prepares a derivative work that is based on
or incorporates PyCIFRW or any part thereof, and wants to make the
derivative work available to others as provided herein, then Licensee
hereby agrees to include in any such work a brief summary of the
changes made to PyCIFRW.

4. ANSTO is making PyCIFRW available to Licensee on an "AS IS"
basis. ANSTO MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, ANSTO MAKES NO AND
DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYCIFRW WILL NOT
INFRINGE ANY THIRD PARTY RIGHTS.

5. ANSTO SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYCIFRW
FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS A
RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYCIFRW, OR ANY
DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.

6. This License Agreement will automatically terminate upon a material
breach of its terms and conditions.

7. Nothing in this License Agreement shall be deemed to create any
relationship of agency, partnership, or joint venture between ANSTO
and Licensee. This License Agreement does not grant permission to use
ANSTO trademarks or trade name in a trademark sense to endorse or
promote products or services of Licensee, or any third party.

8. By copying, installing or otherwise using PyCIFRW, Licensee agrees
to be bound by the terms and conditions of this License Agreement.

"""

Introduction

This file implements a general CIF reading/writing utility. The basic objects (CifFile/CifBlock) read and write syntactically correct CIF 1.1 files including save frames. Objects for validating CIFs are built on these basic objects: A CifDic object is derived from a CifFile created from a DDL1/2 dictionary; and the ValidCifFile/ValidCifBlock objects allow creation/checking of CIF files against a list of CIF dictionaries.

The CifFile class is initialised with either no arguments (a new CIF file) or with the name of an already existing CIF file. Data items are accessed, changed or added using the Python mapping type, i.e. to get dataitem you would type `value = cf[blockname][dataitem]`.

Note also that a CifFile object can be accessed as a mapping type, i.e. using square brackets. Most mapping operations have been implemented (see below).
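
For example (the file, block and item names are illustrative):

    from CifFile import CifFile, ReadCif   # assuming the usual PyCIFRW package layout
    cf = ReadCif("demo.cif")
    value = cf["mydatablock"]["_cell_length_a"]    # mapping-style read
    cf["mydatablock"].overwrite = True             # blocks read from file are locked
    cf["mydatablock"]["_cell_length_a"] = 5.431    # mapping-style write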

We build upon the objects defined in the StarFile class, by imposing a few extra restrictions where necessary.

<*>=
# To maximize python3/python2 compatibility
from __future__ import print_function
from __future__ import unicode_literals
from __future__ import division
from __future__ import absolute_import

try:
    from cStringIO import StringIO
except ImportError:
    from io import StringIO

# Python 2,3 compatibility
try:
    from urllib import urlopen         # for arbitrary opening
    from urlparse import urlparse, urljoin
except ImportError:
    from urllib.request import urlopen
    from urllib.parse import urlparse, urljoin

# The unicode type does not exist in Python3 as the str type
# encompasses unicode.  PyCIFRW tests for 'unicode' would fail.
# Suggestions for a better approach are welcome.
#
# long type no longer exists in Python3, so we alias to int
#
if isinstance(u"abc",str):   #Python3
    unicode = str
    long = int

<Copyright statement>

import re,sys
from . import StarFile
from .StarFile import StarList  #put in global scope for exec statement
try:
    import numpy                   #put in global scope for exec statement
    from .drel import drel_runtime  #put in global scope for exec statement
except ImportError:
    pass                       #will fail when using dictionaries for calcs
from copy import copy          #must be in global scope for exec statement

<Decorators>
<CifBlock class>
<CifFile class>
<Define an error class>
<CIF Dictionary type>
<A valid CIF block>
<A valid CIF file>
<Top-level functions>
<Utility functions>
<Read in a CIF file>
<CifLoopBlock class>
<API documentation flags>

CifFile

A CifFile is subclassed from a StarFile. Our StarFile class has an optional check of line length, which we use.

A CifFile object is a dictionary of CifBlock objects, accessed by block name. As the maximum line length is subject to change, we allow the length to be specified, with the current default set at 2048 characters (CIF 1.1). For reading in files, we only flag a length error if the parameter strict is true, in which case we use parameter maxinlength as our maximum line length on input. Parameter maxoutlength sets the maximum line size for output; if it is not specified, it defaults to the maximum input length.

Note that maxinlength applies to input only. To change the output line length after initialisation, an optional parameter can be passed to the WriteOut method, as sketched below.
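
A short sketch of these controls (we assume here that WriteOut accepts a maxoutlength keyword mirroring the constructor parameter; the filename is illustrative):

    cf = CifFile("demo.cif", strict=1, maxinlength=2048)   # flag over-long input lines
    output_text = cf.WriteOut(maxoutlength=1024)           # narrower lines on output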

<CifFile class>= (<-U)
class CifFile(StarFile.StarFile):
<Initialise data structures>

When initialising, we add those parts that are unique to the CifFile as opposed to a simple collection of blocks - i.e. reading in from a file, and some line length restrictions. We do not indent this section in this noweb file, so that our comment characters output at the beginning of the line.

<Initialise data structures>= (<-U)
    def __init__(self,datasource=None,strict=1,standard='CIF',**kwargs):
        super(CifFile,self).__init__(datasource=datasource,standard=standard, **kwargs)
        self.strict = strict
        self.header_comment = \
"""
##########################################################################
#               Crystallographic Information Format file
#               Produced by PyCifRW module
#
#  This is a CIF file.  CIF has been adopted by the International
#  Union of Crystallography as the standard for data archiving and
#  transmission.
#
#  For information on this file format, follow the CIF links at
#  http://www.iucr.org
##########################################################################
"""

Cif Block class

CifBlocks exist(ed) as a separate class in order to enforce non-nested loops and maximum dataname lengths. As nested loops have now been removed completely from PyCIFRW, the separate class is no longer strictly necessary, but it is kept for backwards compatibility.

<CifBlock class>= (<-U)
class CifBlock(StarFile.StarBlock):
    """
    A class to hold a single block of a CIF file.  A `CifBlock` object can be treated as
    a Python dictionary, in particular, individual items can be accessed using square
    brackets e.g. `b['_a_dataname']`.  All other Python dictionary methods are also
    available (e.g. `keys()`, `values()`).  Looped datanames will return a list of values.

    ## Initialisation

    When provided, `data` should be another `CifBlock` whose contents will be copied to
    this block.

    * if `strict` is set, maximum name lengths will be enforced

    * `maxoutlength` is the maximum length for output lines

    * `wraplength` is the ideal length to make output lines

    * When set, `overwrite` allows the values of datanames to be changed (otherwise an error
    is raised).

    * `compat_mode` will allow deprecated behaviour of creating single-dataname loops using
    the syntax `a[_dataname] = [1,2,3,4]`.  This should now be done by calling `CreateLoop`
    after setting the dataitem value.
    """
    <Initialise Cif Block>
    <Adjust emulation of a mapping type>
    <Add a data item>
    <Return all looped names>

A CifBlock is a StarBlock with a few extra restrictions.
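
For example, the recommended way to build a loop is to set the values and then declare the loop (names illustrative):

    cb = CifBlock()
    cb['_symmetry_cell_setting'] = 'cubic'       # a plain, unlooped item
    cb['_atom_site_label'] = ['C1','O1']         # set the values first...
    cb['_atom_site_occupancy'] = [1.0,0.5]
    cb.CreateLoop(['_atom_site_label','_atom_site_occupancy'])   # ...then create the loop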

<Initialise Cif Block>= (<-U)
def __init__(self,data = (), strict = 1, compat_mode=False, **kwargs):
    """When provided, `data` should be another CifBlock whose contents will be copied to
    this block.

    * if `strict` is set, maximum name lengths will be enforced

    * `maxoutlength` is the maximum length for output lines

    * `wraplength` is the ideal length to make output lines

    * When set, `overwrite` allows the values of datanames to be changed (otherwise an error
    is raised).

    * `compat_mode` will allow deprecated behaviour of creating single-dataname loops using
    the syntax `a[_dataname] = [1,2,3,4]`.  This should now be done by calling `CreateLoop`
    after setting the dataitem value.
    """
    if strict:
        maxnamelength = 75
    else:
        maxnamelength = -1
    super(CifBlock,self).__init__(data=data,maxnamelength=maxnamelength,**kwargs)
    self.dictionary = None   #DDL dictionary referring to this block
    self.compat_mode = compat_mode   #old-style behaviour of setitem

def RemoveCifItem(self,itemname):
    """Remove `itemname` from the CifBlock"""
    self.RemoveItem(itemname)

The second line in the copy method switches the class of the returned object to be a CifBlock. It may not be necessary.

<Adjust emulation of a mapping type>= (<-U)
def __setitem__(self,key,value):
    self.AddItem(key,value)
    # for backwards compatibility make a single-element loop
    if self.compat_mode:
        if isinstance(value,(tuple,list)) and not isinstance(value,StarFile.StarList):
             # single element loop
             self.CreateLoop([key])

def copy(self):
    newblock = super(CifBlock,self).copy()
    return type(self)(newblock)   #catch inheritance; im_class existed only in Python 2

This function was added for the dictionary validation routines. It returns a list where each member is itself a list of item names, corresponding to the names in each loop of the block.

<Return all looped names>= (<-U)
def loopnames(self):
    return [self.loops[a] for a in self.loops]
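
For a block containing a two-dataname loop and a one-dataname loop, the result might look like this (names illustrative):

    cb.loopnames()   # -> [['_atom_site_label','_atom_site_occupancy'],
                     #     ['_symmetry_equiv_pos_as_xyz']]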

Adding a data item. In the old, deprecated method we are passed a tuple containing the set of data names at the beginning, followed by a corresponding set of values.

We implement this behaviour by looping over the input datanames, and adding them to the set of keys. When we have finished, we create the loop.

We check the length of the name, and give an error if the name is greater than 75 characters, which is the CIF 1.1 maximum length.

We also check for consistency, by making sure the new item is not in the block already. If it is, we replace it (consistent with the meaning of square brackets). If it is in a loop, we replace the looped value and all other items in that loop block. This means that when adding loops, we must add them all at once if we call this routine directly.

We typecheck the data items. They can be tuples, strings or lists. If we have a list of values for a single item, the item name should also occur in a single member tuple.

<Add a data item>= (<-U)
def AddCifItem(self,data):
    """ *DEPRECATED*. Use `AddItem` instead."""
    # we accept only tuples, strings and lists!!
    if not (isinstance(data[0],(unicode,tuple,list,str))):
              raise TypeError('Cif datanames are either a string, tuple or list')
    # we catch single item loops as well...
    if isinstance(data[0],(unicode,str)):
        self.AddSingleCifItem(data[0],list(data[1]))
        if isinstance(data[1],(tuple,list)) and not isinstance(data[1],StarFile.StarList):  # a single element loop
            self.CreateLoop([data[0]])
        return
    # otherwise, we loop over the datanames
    keyvals = zip(data[0][0],[list(a) for a in data[1][0]])
    [self.AddSingleCifItem(a,b) for a,b in keyvals]
    # and create the loop
    self.CreateLoop(data[0][0])

def AddSingleCifItem(self,key,value):
    """*Deprecated*. Use `AddItem` instead"""
    """Add a single data item. If it is part of a loop, a separate call should be made"""
    self.AddItem(key,value)
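
For reference, the two deprecated calling conventions look like this (note the nested tuples in the multi-dataname form; names illustrative):

    cb = CifBlock()
    # a single-dataname loop
    cb.AddCifItem(("_atom_site_label", ["C1", "O1"]))
    # several looped datanames together with their value lists
    cb.AddCifItem(((("_atom_site_label", "_atom_site_occupancy"),),
                   ((["C1", "O1"], [1.0, 0.5]),)))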

Reading in a file. We use the STAR grammar parser. Note that the blocks returned will be locked for changing (overwrite=False) and can be unlocked by setting block.overwrite to True.

<Read in a CIF file>= (<-U)
def ReadCif(filename,grammar='auto',scantype='standard',scoping='instance',standard='CIF',
            permissive=False):
    """ Read in a CIF file, returning a `CifFile` object.

    * `filename` may be a URL, a file
    path on the local system, or any object with a `read` method.

    * `grammar` chooses the CIF grammar variant. `1.0` is the original 1992 grammar and `1.1`
    is identical except for the exclusion of square brackets as the first characters in
    undelimited datanames. `2.0` will read files in the CIF2.0 standard, and `STAR2` will
    read files according to the STAR2 publication.  If grammar is `auto` (the default) or
    `None`, autodetection will be attempted in the order `2.0`, `1.1` and `1.0`. This will
    always succeed for properly-formed CIF2.0 files.  Note that under Python 2 only Unicode
    characters in the basic multilingual plane are recognised.

    * `scantype` can be `standard` or `flex`.  `standard` provides pure Python parsing at the
    cost of a factor of 10 or so in speed.  `flex` will tokenise the input CIF file using
    fast C routines, but is not available for CIF2/STAR2 files.  Note that running PyCIFRW in
    Jython uses native Java regular expressions
    to provide a speedup regardless of this argument (and does not yet support CIF2).

    * `scoping` is only relevant where nested save frames are expected (STAR2 only).
    `instance` scoping makes nested save frames
    invisible outside their hierarchy, allowing duplicate save frame names in separate
    hierarchies. `dictionary` scoping makes all save frames within a data block visible to each
    other, thereby restricting all save frames to have unique names.
    * `standard`: currently the only recognised value is `CIF`, which when set enforces a
    maximum length of 75 characters for datanames and has no other effect. """

    finalcif = CifFile(scoping=scoping,standard=standard)
    return StarFile.ReadStar(filename,prepared=finalcif,grammar=grammar,scantype=scantype,
                             permissive=permissive)
    #return StarFile.StarFile(filename,maxlength,scantype=scantype,grammar=grammar,**kwargs)
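
For example (filenames illustrative):

    cf2 = ReadCif("newstyle.cif", grammar="2.0")   # insist on CIF2.0 syntax
    cf_any = ReadCif("unknown.cif")                # default: try 2.0, then 1.1, then 1.0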

Defining an error class: we simply derive a 'nothing' class from the root Python class

<Define an error class>= (<-U)
class CifError(Exception):
    def __init__(self,value):
        self.value = value
    def __str__(self):
        return '\nCif Format error: '+ self.value

class ValidCifError(Exception):
    def __init__(self,value):
        self.value = value
    def __str__(self):
        return '\nCif Validity error: ' + self.value

class CifRecursionError(Exception):
    def __init__(self,key_value,call_stack):
        self.key_value = key_value
        self.call_stack = call_stack
    def __str__(self):
        return "Derivation has recursed, %s seen twice (call stack %s)" % (self.key_value,repr(self.call_stack))


Dictionaries

To avoid ambiguity with the Python dictionary type, we use capital D to denote CIF Dictionaries where misinterpretation is possible.

We build our Dictionary behaviour on top of the StarFile object, which is notionally a collection of StarBlocks. A Dictionary is simply a collection of datablocks, where each datablock corresponds to a single definition. DDL1 had no category definitions.

We adopt a data model whereby the excess information in a DDL2 dictionary is absorbed into special methods (and I am thinking here of the _item_type_list.construct stuff which appears at the global level), which we initialise ourselves for a DDL1 dictionary.

The square bracket notation is repurposed to mean access to the appropriate definition, as the save frame name and the definition may be slightly (or completely) different.
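
In outline, a dictionary is created and interrogated as follows (the dictionary filename is illustrative):

    cd = CifDic("cif_core_ddlm.dic", do_dREL=False)
    defn = cd["_atom_site.label"]            # access by definition id, not block name
    category = defn["_name.category_id"]     # a definition behaves as a normal block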

<CIF Dictionary type>= (<-U)
<Dictionary block type>
class CifDic(StarFile.StarFile):
    """Create a Cif Dictionary object from the provided source, which can
    be a filename/URL or a CifFile.  Optional arguments (relevant to DDLm
    only):

    * do_minimum (Boolean):
         Do not set up the dREL system for auto-calculation or perform
         imports.  This implies do_imports=False and do_dREL=False

    * do_imports = No/Full/Contents/All:
         If not 'No', interpret _import.get statements for
         Full mode/Contents mode/Both respectively. See also option 'heavy'

    * do_dREL = True/False:
         Parse and convert all dREL methods to Python. Implies do_imports=All

    * heavy = True/False:
         (Experimental). If True, importation overwrites definitions. If False,
         attributes are resolved dynamically.
    """
    <Initialise Cif dictionary>
    <DDL-specific initialisation routines>
    <Repurpose standard python methods>
    <Repurpose Starfile methods>
    <Obtaining semantic information from the dictionary>
    <Create alias table>
    <Create category/object table>
    <Add type information>
    <Add category information>
    <Definition manipulation methods>
    <Dictionary output routines>
    <Return a single packet by key>
    <Extract number and esd>
    <Analyse range>
    <Linkage to dREL>
    <Switch on numpy arrays>
    <Convert string to appropriate type>
    <Validation routines>

Dictionary blocks

A dictionary block is essentially identical to a StarBlock, with the extra semantics of chasing through `_import.get` calls in order to transparently return attributes defined in separate dictionaries. If the `_import_cache` is empty, this is skipped.

<Dictionary block type>= (<-U)
class DicBlock(StarFile.StarBlock):
    """A definition block within a dictionary, which allows imports
    to be transparently followed"""

    def __init__(self,*args,**kwargs):
        super(DicBlock,self).__init__(*args,**kwargs)
        self._import_cache = {}
        
    def __getitem__(self,dataname):
        value = None
        if super(DicBlock,self).has_key("_import.get") and self._import_cache:
            value = self.follow_import(super(DicBlock,self).__getitem__("_import.get"),dataname) 
        try:
            final_value = super(DicBlock,self).__getitem__(dataname)
        except KeyError:    #not there
            final_value = value
        if final_value is None:
            raise KeyError("%s not found" % dataname)
        return final_value

    def has_key(self,key):
        try:
            self[key]
        except KeyError:
            return False
        return True
    
    def add_dict_cache(self,name,cached):
        """Add a loaded dictionary to this block's cache"""
        self._import_cache[name]=cached
        
    def follow_import(self,import_info,dataname):
        """Find the dataname values from the imported dictionary. `import_info`
        is a list of import locations"""
        latest_value = None
        for import_ref in import_info:
            file_loc = import_ref["file"]
            if file_loc not in self._import_cache:
                raise ValueError("Dictionary for import %s not found" % file_loc)
            import_from = self._import_cache[file_loc]
            miss = import_ref.get('miss','Exit')
            target_key = import_ref["save"]
            try:
                import_target = import_from[target_key]
            except KeyError:
                if miss == 'Exit':
                    raise CifError('Import frame %s not found in %s' % (target_key,file_loc))
                else: continue
            # now import appropriately
            mode = import_ref.get("mode",'Contents').lower()
            if mode == "contents":   #only this is used at this level
                latest_value = import_target.get(dataname,latest_value)
        return latest_value
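
As an illustration, `follow_import` consumes a list of import tables, each with at least `file` and `save` entries. A minimal sketch (file, frame and dataname values are illustrative, and `template_dic` stands for an already-parsed dictionary object):

    blk = DicBlock()
    # prime the cache with the parsed dictionary for each file named in _import.get
    blk.add_dict_cache("templ_enum.cif", template_dic)
    # given an _import.get value of
    #   [{"file": "templ_enum.cif", "save": "units_code"}]
    # a lookup such as blk["_enumeration_set.state"] consults save frame
    # "units_code" of templ_enum.cif in the default 'Contents' mode, with any
    # locally-defined value for the dataname taking precedence.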
    

Initialisation

We want to be able to accept strings, giving the file name of the CIF dictionary, and pre-initialised CifFile objects. We do not accept CifDic objects. Our initialisation procedure first unifies the interface to the Dictionary, and then runs through the Dictionary producing a normalised form. Following this, type and category information can be collected for later reference.

Validation functions are listed so that it would be possible to add and remove them from the "valid set". This behaviour has not yet been implemented.

When loading DDLm dictionaries we may recursively call this initialisation function with a dictionary to be imported as the argument. In this case we do not want to do all the method derivation, as the necessary categories will be loaded into the calling dictionary rather than the currently initialising dictionary. So there is a keyword argument to stop the operations that should operate on the dictionary as a whole taking place.

The dREL methods require Numpy support, but we do not wish to introduce a global dependence on Numpy. Therefore, we introduce a 'switch' which will return Numpy arrays from the __getitem__ method instead of StarLists. It is intended that the dREL methods will turn this on only during execution, then turn it off afterwards.

Note that DDLm importation logic provides many choices. We have a choice of 'No', 'Contents', 'Full' and 'All' for the amount that is imported. If `heavy` is False, no definition material will be replaced, rather the import will be resolved dynamically.

<Initialise Cif dictionary>= (<-U)
def __init__(self,dic,do_minimum=False,do_imports='All', do_dREL=True,
             grammar='auto',heavy=True,**kwargs):
    self.do_minimum = do_minimum
    if do_minimum:
        do_imports = 'No'
        do_dREL = False
    if do_dREL: do_imports = 'All'
    if not heavy and do_imports not in ('Contents','No'):
        raise ValueError("Light (dynamic) imports are only available for mode 'Contents'")
    self.template_cache = {}    #for DDLm imports
    self.ddlm_functions = {}    #for DDLm functions
    self.switch_numpy(False)    #no Numpy arrays returned
    super(CifDic,self).__init__(datasource=dic,grammar=grammar,blocktype=DicBlock,**kwargs)
    self.standard = 'Dic'    #for correct output order
    self.scoping = 'dictionary'
    (self.dicname,self.diclang) = self.dic_determine()
    print('%s is a %s dictionary' % (self.dicname,self.diclang))
    self.scopes_mandatory = {}
    self.scopes_naughty = {}
    self._import_dics = []   #Non-empty for DDLm only
    # rename and expand out definitions using "_name" in DDL dictionaries
    if self.diclang == "DDL1":
        self.DDL1_normalise()   #this removes any non-definition entries
    self.create_def_block_table() #From now on, [] uses definition_id
    if self.diclang == "DDL1":
        self.ddl1_cat_load()
    elif self.diclang == "DDL2":
        self.DDL2_normalise()   #iron out some DDL2 tricky bits
    elif self.diclang == "DDLm":
        self.scoping = 'dictionary'   #expose all save frames
        if do_imports != 'No':
            self.obtain_imports(import_mode=do_imports,heavy=heavy)#recursively calls this routine
        self.create_alias_table()
        self.create_cat_obj_table()
        self.create_cat_key_table()
        if do_dREL:
            print('Doing full dictionary initialisation')
            self.initialise_drel()
    self.add_category_info(full=do_dREL)
    # initialise type information
    self.typedic={}
    self.primdic = {}   #typecode<->primitive type translation
    self.add_type_info()
    self.install_validation_functions()

These routines seek to impose a uniform structure on dictionaries written in DDL1, DDL2 and DDLm. Historically, the richer and more systematic DDL2 approach was used to describe DDL1 definitions. With the advent of DDLm, the DDLm paradigm is likely to overtake DDL2. When interpreting the following routines, therefore, bear in mind that they were originally written with DDL2 in mind, and are gradually shifting to DDLm.

<DDL-specific initialisation routines>= (<-U)
<Dictionary determination function>
<Deal with DDL1 differences>
<Load categories with DDL2-type information>
<Iron out DDL2 strangeness>
<Parse DDLm validity information>
<Perform DDLm imports>

This function determines whether we have a DDLm, DDL2 or DDL1 dictionary. We are built from a CifFile object. The current method looks for an on_this_dictionary block, which implies DDL1, or a single root block, which implies DDL2/DDLm. This is also where we define some universal keys for uniform access to DDL attributes.

<Dictionary determination function>= (<-U)
def dic_determine(self):
    if "on_this_dictionary" in self:
        self.master_block = super(CifDic,self).__getitem__("on_this_dictionary")
        self.def_id_spec = "_name"
        self.cat_id_spec = "_category.id"   #we add this ourselves
        self.type_spec = "_type"
        self.enum_spec = "_enumeration"
        self.cat_spec = "_category"
        self.esd_spec = "_type_conditions"
        self.must_loop_spec = "_list"
        self.must_exist_spec = "_list_mandatory"
        self.list_ref_spec = "_list_reference"
        self.key_spec = "_list_mandatory"
        self.unique_spec = "_list_uniqueness"
        self.child_spec = "_list_link_child"
        self.parent_spec = "_list_link_parent"
        self.related_func = "_related_function"
        self.related_item = "_related_item"
        self.primitive_type = "_type"
        self.dep_spec = "xxx"
        self.cat_list = []   #to save searching all the time
        name = super(CifDic,self).__getitem__("on_this_dictionary")["_dictionary_name"]
        version = super(CifDic,self).__getitem__("on_this_dictionary")["_dictionary_version"]
        return (name+version,"DDL1")
    elif len(self.get_roots()) == 1:              # DDL2/DDLm
        self.master_block = super(CifDic,self).__getitem__(self.get_roots()[0][0])
        # now change to dictionary scoping
        self.scoping = 'dictionary'
        name = self.master_block["_dictionary.title"]
        version = self.master_block["_dictionary.version"]
        if self.master_block.has_key("_dictionary.class"):   #DDLm
            self.enum_spec = '_enumeration_set.state'
            self.key_spec = '_category.key_id'
            self.must_exist_spec = None
            self.cat_spec = '_name.category_id'
            self.primitive_type = '_type.contents'
            self.cat_id_spec = "_definition.id"
            self.def_id_spec = "_definition.id"
            return(name+version,"DDLm")
        else:   #DDL2
            self.cat_id_spec = "_category.id"
            self.def_id_spec = "_item.name"
            self.key_spec = "_category_mandatory.name"
            self.type_spec = "_item_type.code"
            self.enum_spec = "_item_enumeration.value"
            self.esd_spec = "_item_type_conditions.code"
            self.cat_spec = "_item.category_id"
            self.loop_spec = "there_is_no_loop_spec!"
            self.must_loop_spec = "xxx"
            self.must_exist_spec = "_item.mandatory_code"
            self.child_spec = "_item_linked.child_name"
            self.parent_spec = "_item_linked.parent_name"
            self.related_func = "_item_related.function_code"
            self.related_item = "_item_related.related_name"
            self.unique_spec = "_category_key.name"
            self.list_ref_spec = "xxx"
            self.primitive_type = "_type"
            self.dep_spec = "_item_dependent.dependent_name"
            return (name+version,"DDL2")
    else:
        raise CifError("Unable to determine dictionary DDL version")

DDL1 differences. Firstly, in DDL1 you can loop a _name to get definitions of related names (e.g. x,y,z). Secondly, the data block name is missing the initial underscore, so we need to read the _name value. There is one block without a _name attribute, which we proceed to destroy (exercise for the reader: which one?).

A further complex difference is in the way that ranges are specified. A DDL2 dictionary generally loops the _item_range.maximum/minimum items, in order to specify inclusion of the endpoints of the range, whereas DDL1 dictionaries simply specify ranges as n:m. We translate these values into item_range specifications.
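
For example, a closed DDL1 range is rewritten as follows (sketch):

    # DDL1:    _enumeration_range    0:10
    # becomes the DDL2-style loop
    #     _item_range.maximum    10  10  0
    #     _item_range.minimum    10   0  0
    # where the (10,10) and (0,0) rows record that each endpoint is itself a valid value.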

If the _list item is missing for a dictionary definition, it defaults to no, i.e. the item cannot be listed. We explicitly include this in our transformations.

The dictionaries also contain categories, which are used to impose constraints on groupings of items in lists. Category names in DDL2 dictionaries have no leading underscore, and the constraints are stored directly in the category definition. So, with a DDL1 dictionary, we rewrite things to match the DDL2 methods. In particular, the list_uniqueness item becomes the category_key.name attribute of the category. This may apply to _list_mandatory and/or _list_reference too, but the current specification is vague.

Also, it is possible for cross-item references (e.g. in a _list_reference) to include a whole range of items by terminating the name with an underscore. It is then understood to include anything starting with those characters. We explicitly try to expand these references out.

Note the way we convert to DDL2-style type definitions; any definition having a _type_construct regular expression triggers the definition of a whole new type, which is stored as per DDL2, for the later type dictionary construction process to find.

<Deal with DDL1 differences>= (<-U)
def DDL1_normalise(self):
    # switch off block name collision checks
    self.standard = None
    # add default type information in DDL2 style
    # initial types and constructs
    base_types = ["char","numb","null"]
    prim_types = base_types[:]
    base_constructs = [".*",
        '(-?(([0-9]*[.][0-9]+)|([0-9]+)[.]?)([(][0-9]+[)])?([eEdD][+-]?[0-9]+)?)|\\?|\\.',
        "\"\" "]
    for key,value in self.items():
       newnames = [key]  #keep by default
       if "_name" in value:
           real_name = value["_name"]
           if isinstance(real_name,list):        #looped values
               for looped_name in real_name:
                  new_value = value.copy()
                  new_value["_name"] = looped_name  #only looped name
                  self[looped_name] = new_value
               newnames = real_name
           else:
                  self[real_name] = value
                  newnames = [real_name]
       # delete the old one
       if key not in newnames:
          del self[key]
    # loop again to normalise the contents of each definition
    for key,value in self.items():
       #unlock the block
       save_overwrite = value.overwrite
       value.overwrite = True
       # deal with a missing _list, _type_conditions
       if "_list" not in value: value["_list"] = 'no'
       if "_type_conditions" not in value: value["_type_conditions"] = 'none'
       # deal with enumeration ranges
       if "_enumeration_range" in value:
           max,min = self.getmaxmin(value["_enumeration_range"])
           if min == ".":
               self[key].AddLoopItem((("_item_range.maximum","_item_range.minimum"),((max,max),(max,min))))
           elif max == ".":
               self[key].AddLoopItem((("_item_range.maximum","_item_range.minimum"),((max,min),(min,min))))
           else:
               self[key].AddLoopItem((("_item_range.maximum","_item_range.minimum"),((max,max,min),(max,min,min))))
       #add any type construct information
       if "_type_construct" in value:
           base_types.append(value["_name"]+"_type")   #ie dataname_type
           base_constructs.append(value["_type_construct"]+"$")
           prim_types.append(value["_type"])     #keep a record
           value["_type"] = base_types[-1]   #the new type name

       #make categories conform with ddl2
       #note that we must remove everything from the last underscore
       if value.get("_category",None) == "category_overview":
            last_under = value["_name"].rindex("_")
            catid = value["_name"][1:last_under]
            value["_category.id"] = catid  #remove square brackets
            if catid not in self.cat_list: self.cat_list.append(catid)
       value.overwrite = save_overwrite
    # we now add any missing categories before filling in the rest of the
    # information
    for key,value in self.items():
        #print('processing ddl1 definition %s' % key)
        if "_category" in self[key]:
            if self[key]["_category"] not in self.cat_list:
                # rogue category, add it in
                newcat = self[key]["_category"]
                fake_name = "_" + newcat + "_[]"
                newcatdata = CifBlock()
                newcatdata["_category"] = "category_overview"
                newcatdata["_category.id"] = newcat
                newcatdata["_type"] = "null"
                self[fake_name] = newcatdata
                self.cat_list.append(newcat)
    # write out the type information in DDL2 style
    self.master_block.AddLoopItem((
        ("_item_type_list.code","_item_type_list.construct",
          "_item_type_list.primitive_code"),
        (base_types,base_constructs,prim_types)
        ))

DDL2 has a few idiosyncrasies of its own. For some reason, in the definition of a parent item, all the child items are listed and their mandatory/not mandatory status specified. This duplicates information under the child item itself, although there is something on the web indicating that this is purely cosmetic and not strictly necessary. For our purposes, we want to extract the mandatory/not mandatory nature of the current item, which appears to be conventionally at the top of the list (we do not assume this below). The only way to determine the actual item name is to look at the save frame name, which is a bit of a fragile tactic - especially as dictionary merge operations are supposed to look for _item.name.

So, in these cases, we have to assume the save frame name is the one we want, and find this entry in the list.

Additionally, the child entry doesn't contain the category specification, so we add this into the child entry at the same time, together with a pointer to the parent item.

Such entries then have a loop listing parents and children down the whole hierarchy, starting with the current item. We disentangle this, placing parent item attributes in the child items, moving sub-children down to their level. Sub children may not exist at all, so we create them if necessary.

To make life more interesting, the PDBx dictionaries have an entry_pc placeholder in which additional (and sometimes repeated) parent-child relationships can be expressed. We cannot assume that any given parent-child relationship is stated at a single site in the file. What is more, it appears that multiple parents for a single child are defined in the _entry.pdbx_pc entry. Our changes to the file pre-checking are therefore restricted to making sure that the child contains information about the parents; we do not interfere with the parent's information about the children, even if we consider that to be superfluous. Note that we will have to add parent-child validity checks to check consistency among all these relationships.

Update: in the DDL-2.1.6 file, only the parents/children are looped, rather than the item names, so we have to check looping separately.

Next: DDL2 contains aliases to DDL1 item names, so in theory we should be able to use a DDL2 dictionary to validate a DDL1-style CIF file. We create separate definition blocks for each alias to enable this.

Also, we flatten out any single-element lists for item_name. This is simply to avoid the value of e.g. category_id being a single-element list instead of a string.

Note also that _item.category_id in DDL2 is 'implicit', meaning in this case that you can determine it from the item name. We add in the category for simplicity.

<Iron out DDL2 strangeness>= (<-U)
<Loopify parent-child relationships>

def DDL2_normalise(self):
   # use concrete lists rather than filter objects, as these are iterated more than once
   listed_defs = [a for a in self.keys() if isinstance(self[a].get('_item.name'),list)]
   # now filter out all the single element lists!
   dodgy_defs = [a for a in listed_defs if len(self[a]['_item.name']) > 1]
   for item_def in dodgy_defs:
      <Repopulate child definitions>
   <Populate parent and child links correctly>
   # now flatten any single element lists
   single_defs = [a for a in listed_defs if len(self[a]['_item.name'])==1]
   for flat_def in single_defs:
       flat_keys = self[flat_def].GetLoop('_item.name').keys()
       for flat_key in flat_keys: self[flat_def][flat_key] = self[flat_def][flat_key][0]
   # now deal with the multiple lists
   # next we do aliases
   all_aliases = [a for a in self.keys() if self[a].has_key('_item_aliases.alias_name')]
   for aliased in all_aliases:
      my_aliases = listify(self[aliased]['_item_aliases.alias_name'])
      for alias in my_aliases:
          self[alias] = self[aliased].copy()   #we are going to delete stuff...
          del self[alias]["_item_aliases.alias_name"]

As some DDL2 dictionaries neglect children, we repopulate the skeleton or non-existent definitions that may be provided in the dictionary.

<Repopulate child definitions>= (<-U)
      # print("DDL2 norm: processing %s" % item_def)
      thisdef = self[item_def]
      packet_no = thisdef['_item.name'].index(item_def)
      realcat = thisdef['_item.category_id'][packet_no]
      realmand = thisdef['_item.mandatory_code'][packet_no]
      # first add in all the missing categories
      # we don't replace the entry in the list corresponding to the
      # current item, as that would wipe out the information we want
      for child_no in range(len(thisdef['_item.name'])):
          if child_no == packet_no: continue
          child_name = thisdef['_item.name'][child_no]
          child_cat = thisdef['_item.category_id'][child_no]
          child_mand = thisdef['_item.mandatory_code'][child_no]
          if child_name not in self:
              self[child_name] = CifBlock()
              self[child_name]['_item.name'] = child_name
          self[child_name]['_item.category_id'] = child_cat
          self[child_name]['_item.mandatory_code'] = child_mand
      self[item_def]['_item.name'] = item_def
      self[item_def]['_item.category_id'] = realcat
      self[item_def]['_item.mandatory_code'] = realmand

Populating parent and child links. The DDL2 model uses parent-child relationships to create relational database behaviour. This means that the emphasis is on simply linking two ids together directionally. This link is not necessarily inside a definition that is being linked, but we require that any parents and children are identified within the definition that they relate to. This means we have to sometimes relocate and expand links. As an item can simultaneously be both a parent and a child, we need to explicitly fill in the links even within a single definition.

<Populate parent and child links correctly>= (<-U)
target_defs = [a for a in self.keys() if '_item_linked.child_name' in self[a] or \
                              '_item_linked.parent_name' in self[a]]
# now dodgy_defs contains all definition blocks with more than one child/parent link
for item_def in dodgy_defs: self.create_pcloop(item_def)           #regularise appearance
for item_def in dodgy_defs:
      print('Processing %s' % item_def)
      thisdef = self[item_def]
      child_list = thisdef['_item_linked.child_name']
      parents = thisdef['_item_linked.parent_name']
      # for each parent, find the list of children.
      family = list(zip(parents,child_list))
      notmychildren = family         #We aim to remove non-children
      # Loop over the parents, relocating as necessary
      while len(notmychildren):
         # get all children of first entry
         mychildren = [a for a in family if a[0]==notmychildren[0][0]]
         print("Parent %s: %d children" % (notmychildren[0][0],len(mychildren)))
         for parent,child in mychildren:   #parent is the same for all
                  # Make sure that we simply add in the new entry for the child, not replace it,
                  # otherwise we might spoil the child entry loop structure
                  try:
                      childloop = self[child].GetLoop('_item_linked.parent_name')
                  except KeyError:
                      print('Creating new parent entry %s for definition %s' % (parent,child))
                      self[child]['_item_linked.parent_name'] = [parent]
                      childloop = self[child].GetLoop('_item_linked.parent_name')
                      childloop.AddLoopItem(('_item_linked.child_name',[child]))
                      continue
                  else:
                      # A parent loop already exists and so will a child loop due to the
                      # call to create_pcloop above
                      pars = [a for a in childloop if getattr(a,'_item_linked.child_name','')==child]
                      goodpars = [a for a in pars if getattr(a,'_item_linked.parent_name','')==parent]
                      if len(goodpars)>0:   #no need to add it
                          print('Skipping duplicated parent - child entry in %s: %s - %s' % (child,parent,child))
                          continue
                      print('Adding %s to %s entry' % (parent,child))
                      newpacket = childloop.GetPacket(0)   #essentially a copy, I hope
                      setattr(newpacket,'_item_linked.child_name',child)
                      setattr(newpacket,'_item_linked.parent_name',parent)
                      childloop.AddPacket(newpacket)
         #
         # Make sure the parent also points to the children.  We get
         # the current entry, then add our
         # new values if they are not there already
         #
         parent_name = mychildren[0][0]
         old_children = self[parent_name].get('_item_linked.child_name',[])
         old_parents = self[parent_name].get('_item_linked.parent_name',[])
         oldfamily = list(zip(old_parents,old_children))   #concrete list, as it is scanned repeatedly
         newfamily = []
         print('Old parents -> %s' % repr(old_parents))
         for jj, childname in mychildren:
             alreadythere = [a for a in oldfamily if a[0]==parent_name and a[1] ==childname]
             if len(alreadythere)>0: continue
             print('Adding new child %s to parent definition at %s' % (childname,parent_name))
             old_children.append(childname)
             old_parents.append(parent_name)
         # Now output the loop, blowing away previous definitions.  If there is something
         # else in this category, we are destroying it.
         newloop = CifLoopBlock(dimension=1)
         newloop.AddLoopItem(('_item_linked.parent_name',old_parents))
         newloop.AddLoopItem(('_item_linked.child_name',old_children))
         del self[parent_name]['_item_linked.parent_name']
         del self[parent_name]['_item_linked.child_name']
         self[parent_name].insert_loop(newloop)
         print('New parents -> %s' % repr(self[parent_name]['_item_linked.parent_name']))
         # now make a new,smaller list
         notmychildren = [a for a in notmychildren if a[0]!=mychildren[0][0]]

In order to handle parent-child relationships in a regular way, we want to assume that all parent-child entries occur in a loop, with both members present. This routine does that for us. If the parent is missing, it is assumed to be the currently-defined item. If the child is missing, likewise.

<Loopify parent-child relationships>= (<-U)
def create_pcloop(self,definition):
    old_children = self[definition].get('_item_linked.child_name',[])
    old_parents = self[definition].get('_item_linked.parent_name',[])
    if isinstance(old_children,unicode):
         old_children = [old_children]
    if isinstance(old_parents,unicode):
         old_parents = [old_parents]
    if (len(old_children)==0 and len(old_parents)==0) or \
       (len(old_children) > 1 and len(old_parents)>1):
         return
    if len(old_children)==0:
         old_children = [definition]*len(old_parents)
    if len(old_parents)==0:
         old_parents = [definition]*len(old_children)
    newloop = CifLoopBlock(dimension=1)
    newloop.AddLoopItem(('_item_linked.parent_name',old_parents))
    newloop.AddLoopItem(('_item_linked.child_name',old_children))
    try:
        del self[definition]['_item_linked.parent_name']
        del self[definition]['_item_linked.child_name']
    except KeyError:
        pass
    self[definition].insert_loop(newloop)


Loading the DDL1 categories with DDL2-type information. DDL2 people wisely put category-wide information in the category definition rather than spreading it out between category items. We collect this information together here.

This routine is the big time-waster in initialising a DDL1 dictionary, so we have attempted to optimise it by locally defining functions instead of using lambdas, and by making one loop through the dictionary instead of hundreds.

<Load categories with DDL2-type information>= (<-U)
def ddl1_cat_load(self):
    deflist = self.keys()       #slight optimization
    cat_mand_dic = {}
    cat_unique_dic = {}
    # a function to extract any necessary information from each definition
    def get_cat_info(single_def):
        if self[single_def].get(self.must_exist_spec)=='yes':
            thiscat = self[single_def]["_category"]
            curval = cat_mand_dic.get(thiscat,[])
            curval.append(single_def)
            cat_mand_dic[thiscat] = curval
        # now the unique items...
        # cif_core.dic throws us a curly one: the value of list_uniqueness is
        # not the same as the defined item for publ_body_label, so we have
        # to collect both together.  We assume a non-listed entry, which
        # is true for all current (May 2005) ddl1 dictionaries.
        if self[single_def].get(self.unique_spec,None)!=None:
            thiscat = self[single_def]["_category"]
            new_unique = self[single_def][self.unique_spec]
            uis = cat_unique_dic.get(thiscat,[])
            if single_def not in uis: uis.append(single_def)
            if new_unique not in uis: uis.append(new_unique)
            cat_unique_dic[thiscat] = uis

    [get_cat_info(a) for a in deflist] # apply the above function
    for cat in cat_mand_dic.keys():
        self[cat]["_category_mandatory.name"] = cat_mand_dic[cat]
    for cat in cat_unique_dic.keys():
        self[cat]["_category_key.name"] = cat_unique_dic[cat]

A dataname can appear in a file under a different name if it has been aliased. We create an alias table to speed up lookup. The table is indexed by true name, with a list of alternatives.

<Create alias table>= (<-U)
def create_alias_table(self):
    """Populate an alias table that we can look up when searching for a dataname"""
    all_aliases = [a for a in self.keys() if '_alias.definition_id' in self[a]]
    self.alias_table = dict([[a,self[a]['_alias.definition_id']] for a in all_aliases])
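
For a DDLm core dictionary the resulting table might contain entries such as (illustrative):

    # alias_table maps definition id -> list of alternative (aliased) datanames
    cd.alias_table['_atom_site.label']   # -> ['_atom_site_label']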

DDLm internally refers to data items by the category.object notation, with the twist that child categories of loops can have their objects appear in the parent category. So this table prepares a complete list of (cat,obj):dataname correspondences, as the implementation of parent-child requires looking up a table each time searching for children.

The recursive expand_base_table function returns a dictionary of (name,definition_id) pairs indexing the corresponding datanames. We must catch any keys and exclude them from this process, as they are allowed to have the same object_id as their parent key in the enclosing datablock and will overwrite the entry for the parent key if left in. We also note that the example dictionary allows these types of name collisions if an item is intended to be identical (e.g. _atom_site_aniso.type_symbol and _atom_site.type_symbol), so we create a short list of possible alternative names for each (cat,obj) pair.

The create_nested_key_table stores information about which keys index child categories. This way applications can search for any loops containing these keys and expand packets for dREL accordingly.
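
Once these tables are built, lookups take the following shape (names illustrative):

    # (cat,obj) keys are lower case; values list the candidate datanames
    cd.cat_obj_lookup_table[('atom_site','label')]        # -> ['_atom_site.label']
    # an object inherited from a child loop category may add alternatives
    cd.cat_obj_lookup_table[('atom_site','type_symbol')]  # -> ['_atom_site.type_symbol',
                                                          #     '_atom_site_aniso.type_symbol']
    # and each loopable category lists its (compound) keys
    cd.cat_key_table['atom_site']                         # -> [['_atom_site.label']]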

<Create category/object table>= (<-U)
def create_cat_obj_table(self):
    """Populate a table indexed by (cat,obj) and returning the correct dataname"""
    base_table = dict([((self[a].get('_name.category_id','').lower(),self[a].get('_name.object_id','').lower()),[self[a].get('_definition.id','')]) \
                       for a in self.keys() if self[a].get('_definition.scope','Item')=='Item'])
    loopable = self.get_loopable_cats()
    loopers = [self.ddlm_immediate_children(a) for a in loopable]
    print('Loopable cats:' + repr(loopable))
    loop_children = [[b for b in a if b.lower() in loopable ] for a in loopers]
    expand_list = dict([(a,b) for a,b in zip(loopable,loop_children) if len(b)>0])
    print("Expansion list:" + repr(expand_list))
    extra_table = {}   #for debugging we keep it separate from base_table until the end
    def expand_base_table(parent_cat,child_cats):
        extra_names = []
        # first deal with all the child categories
        for child_cat in child_cats:
          nn = []
          if child_cat in expand_list:  # a nested category: grab its names
            nn = expand_base_table(child_cat,expand_list[child_cat])
            # store child names
            extra_names += nn
          # add all child names to the table
          child_names = [(self[n]['_name.object_id'].lower(),self[n]['_definition.id']) \
                         for n in self.names_in_cat(child_cat) if self[n].get('_type.purpose','') != 'Key']
          child_names += extra_names
          extra_table.update(dict([((parent_cat,obj),[name]) for obj,name in child_names if (parent_cat,obj) not in extra_table]))
        # and the repeated ones get appended instead
        repeats = [(obj,name) for obj,name in child_names if (parent_cat,obj) in extra_table and name not in extra_table[(parent_cat,obj)]]
        for obj,name in repeats:
            extra_table[(parent_cat,obj)] += [name]
        # and finally, add our own names to the return list
        child_names += [(self[n]['_name.object_id'].lower(),self[n]['_definition.id']) \
                        for n in self.names_in_cat(parent_cat) if self[n].get('_type.purpose','')!='Key']
        return child_names
    [expand_base_table(parent,child) for parent,child in expand_list.items()]
    print('Expansion cat/obj values: ' + repr(extra_table))
    # append repeated ones
    non_repeats = dict([a for a in extra_table.items() if a[0] not in base_table])
    repeats = [a for a in extra_table.keys() if a in base_table]
    base_table.update(non_repeats)
    for k in repeats:
        base_table[k] += extra_table[k]
    self.cat_obj_lookup_table = base_table
    self.loop_expand_list = expand_list

def get_loopable_cats(self):
    """A short utility function which returns a list of looped categories. This
    is preferred to a fixed attribute as that fixed attribute would need to be
    updated after any edits"""
    return [a.lower() for a in self.keys() if self[a].get('_definition.class','')=='Loop']

def create_cat_key_table(self):
    """Create a utility table with a list of keys applicable to each category. A key is
    a compound key, that is, it is a list"""
    self.cat_key_table = dict([(c,[listify(self[c].get("_category_key.name",
        [self[c].get("_category.key_id")]))]) for c in self.get_loopable_cats()])
    def collect_keys(parent_cat,child_cats):
            kk = []
            for child_cat in child_cats:
                if child_cat in self.loop_expand_list:
                    kk += collect_keys(child_cat,self.loop_expand_list[child_cat])
                # add these keys to our list
                kk += [listify(self[child_cat].get('_category_key.name',[self[child_cat].get('_category.key_id')]))]
            self.cat_key_table[parent_cat] = self.cat_key_table[parent_cat] + kk
            return kk
    for k,v in self.loop_expand_list.items():
        collect_keys(k,v)
    print('Keys for categories' + repr(self.cat_key_table))

CIF Dictionaries use the square bracket notation to refer to the definition, as for CifFile objects, but the key is the definition itself, rather than the block name. So we have to create a lookup table. However, template dictionaries may not have a _definition.id, which means we have to revert to their blockname, so we use blockname as a default. We also completely ignore case, which is a bit liberal, as definitions themselves are case-sensitive. We catch duplicate definitions (e.g. as a result of incorrect merging).

If a definition is not found, we search any dictionaries that were imported in 'Full' mode. This means that definitions in the dictionary proper override anything in the imported dictionaries.

<Repurpose standard python methods>= (<-U)
def create_def_block_table(self):
    """ Create an internal table matching definition to block id """
    proto_table = [(super(CifDic,self).__getitem__(a),a) for a in super(CifDic,self).keys()]
    # now get the actual ids instead of blocks
    proto_table = list([(a[0].get(self.cat_id_spec,a[0].get(self.def_id_spec,a[1])),a[1]) for a in proto_table])
    # remove non-definitions
    if self.diclang != "DDL1":
        top_blocks = list([a[0].lower() for a in self.get_roots()])
    else:
        top_blocks = ["on_this_dictionary"]
    # catch dodgy duplicates
    uniques = set([a[0] for a in proto_table])
    if len(uniques)<len(proto_table):
        def_names = list([a[0] for a in proto_table])
        dodgy = [a for a in def_names if def_names.count(a)>1]
        raise CifError('Duplicate definitions in dictionary:' + repr(dodgy))
    self.block_id_table = dict([(a[0].lower(),a[1].lower()) for a in proto_table if a[1].lower() not in top_blocks])

def __getitem__(self,key):
    """Access a datablock by definition id, after the lookup has been created"""
    try:
        return super(CifDic,self).__getitem__(self.block_id_table[key.lower()])
    except AttributeError:   #block_id_table not present yet
        return super(CifDic,self).__getitem__(key)
    except KeyError: # key is missing
        try: # print('Definition for %s not found, reverting to CifFile' % key)
            return super(CifDic,self).__getitem__(key)
        except KeyError: # try imports
            return self.lookup_imports(key)

def __setitem__(self,key,value):
    """Add a new definition block"""
    super(CifDic,self).__setitem__(key,value)
    try:
        self.block_id_table[value['_definition.id'].lower()]=key.lower()   #lower case for case-insensitive lookup
    except AttributeError:   #does not exist yet
        pass

def NewBlock(self,*args,**kwargs):
    newname = super(CifDic,self).NewBlock(*args,**kwargs)
    try:
        self.block_id_table[self[newname]['_definition.id'].lower()]=newname.lower()
    except AttributeError: #no block_id table
        pass
    return newname
            
def __delitem__(self,key):
    """Remove a definition"""
    try:
        super(CifDic,self).__delitem__(self.block_id_table[key.lower()])
        del self.block_id_table[key.lower()]
    except (AttributeError,KeyError):   #block_id_table not present yet
        super(CifDic,self).__delitem__(key)
        return
    # fix other datastructures
    # cat_obj table

def keys(self):
    """Return all definitions"""
    try:
        return self.block_id_table.keys()
    except AttributeError:
        return super(CifDic,self).keys()

def has_key(self,key):
    return key in self

def __contains__(self,key):
    try:
        return key.lower() in self.block_id_table
    except AttributeError:
        return super(CifDic,self).__contains__(key)

def items(self):
    """Return (key,value) pairs"""
    return list([(a,self[a]) for a in self.keys()])

Any StarFile method that uses the square-bracket notation or built-in syntax (e.g. del) to access keys may fail if the set of keys it uses is not that provided by the keys() method above, as the object delegation using super() does not apply. As we have set up our methods above to 'fall through' to the underlying CifFile, the process of renaming may or may not have called our __delitem__ method to remove the definition, so we check.

<Repurpose Starfile methods>= (<-U)
def unlock(self):
    """Allow overwriting of all definitions in this collection"""
    for a in self.keys():
        self[a].overwrite=True

def lock(self):
    """Disallow changes in definitions"""
    for a in self.keys():
        self[a].overwrite=False

def rename(self,oldname,newname,blockname_as_well=True):
    """Change a _definition.id from oldname to newname, and if `blockname_as_well` is True,
    change the underlying blockname too."""
    if blockname_as_well:
        super(CifDic,self).rename(self.block_id_table[oldname.lower()],newname)
        self.block_id_table[newname.lower()]=newname
        if oldname.lower() in self.block_id_table: #not removed
           del self.block_id_table[oldname.lower()]
    else:
        self.block_id_table[newname.lower()]=self.block_id_table[oldname.lower()]
        del self.block_id_table[oldname.lower()]
        return

Semantic information

<Obtaining semantic information from the dictionary>= (<-U)
<Operations with semantic children>
<Get category information>
<List all items in a category>

For convenience we provide ways of interrogating the semantic tree of categories. Note that if we are passed the top-level datablock, the semantic children are the syntactic children. An additional method finds the 'dangling' definitions, which are definitions that have no category definition present - these might be definitions added by this dictionary to categories found in other dictionaries.
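
A short sketch of these calls (the category name is an illustrative assumption):

    # cd.ddlm_immediate_children('atom_site')  # datanames directly in ATOM_SITE
    # cd.ddlm_all_children('atom_site')        # adds nested categories, plus 'atom_site' itself
    # cd.ddlm_danglers()                       # definitions unreachable from the 'Head' category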

<Operations with semantic children>= (<-U)
def get_root_category(self):
    """Get the single 'Head' category of this dictionary"""
    root_cats = [r for r in self.keys() if self[r].get('_definition.class','Datum')=='Head']
    if len(root_cats)>1 or len(root_cats)==0:
        raise CifError("Cannot determine a unique Head category, got" % repr(root_cats))
    return root_cats[0]

def ddlm_immediate_children(self,catname):
    """Return a list of datanames for the immediate children of catname.  These are
    semantic children (i.e. based on _name.category_id), not structural children as
    in the case of StarFile.get_immediate_children"""

    straight_children = [a for a in self.keys() if self[a].get('_name.category_id','').lower() == catname.lower()]
    return list(straight_children)

def ddlm_all_children(self,catname):
    """Return a list of all children, including the `catname`"""
    all_children = self.ddlm_immediate_children(catname)
    cat_children = [a for a in all_children if self[a].get('_definition.scope','Item') == 'Category']
    for c in cat_children:
        all_children.remove(c)
        all_children += self.ddlm_all_children(c)
    return all_children + [catname]

def is_semantic_child(self,parent,maybe_child):
    """Return true if `maybe_child` is a child of `parent`"""
    all_children = self.ddlm_all_children(parent)
    return maybe_child in all_children

def ddlm_danglers(self):
    """Return a list of definitions that do not have a category defined
    for them, or are children of an unattached category"""
    top_block = self.get_root_category()
    connected = set(self.ddlm_all_children(top_block))
    all_keys = set(self.keys())
    unconnected = all_keys - connected
    return list(unconnected)

def get_ddlm_parent(self,itemname):
    """Get the parent category of itemname"""
    parent = self[itemname].get('_name.category_id','')
    if parent == '':  # no parent specified at all
        raise CifError("%s has no parent" % itemname)
    return parent

Some methods for interrogating categories for names.

<Get category information>= (<-U)
def expand_category_opt(self,name_list):
    """Return a list of all non-category items in a category or return the name
       if the name is not a category"""
    new_list = []
    for name in name_list:
        if self.get(name,{}).get('_definition.scope','Item') == 'Category':
            new_list += self.expand_category_opt([a for a in self.keys() if \
                     self[a].get('_name.category_id','').lower() == name.lower()])
        else:
            new_list.append(name)
    return new_list

def get_categories(self):
    """Return a list of category names"""
    return list([c for c in self.keys() if self[c].get("_definition.scope")=='Category'])

This method was added to facilitate running dREL scripts, which treat certain variables as having attributes which all belong to a single category. We return only the extension in keeping with dREL syntax. If names_only is true, we return only the object part of the dataname. Note that sub categories are excluded. TODO: use cat-obj table for speed.

<List all items in a category>= (<-U)
def names_in_cat(self,cat,names_only=False):
    names = [a for a in self.keys() if self[a].get('_name.category_id','').lower()==cat.lower()]
    if not names_only:
        return list([a for a in names if self[a].get('_definition.scope','Item')=='Item'])
    else:
        return list([self[a]["_name.object_id"] for a in names])


DDLm introduces validity information in the enclosing datablock. It is a loop of (scope, attribute) values, where the scope is one of Dictionary (everywhere), Category (the whole category) or Item (just the single definition), and application of the attributes may be Mandatory, Recommended or Prohibited; the code below records the Mandatory and Prohibited cases. This loop only appears in the DDLm attributes dictionary, so the information is blank unless we are dealing with that dictionary.
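
An illustrative fragment of such a loop (the attribute values are assumptions modelled on the DDLm attributes dictionary, matching the keys read by the code below):

    loop_
      _dictionary_valid.application
      _dictionary_valid.attributes
        [Dictionary  Mandatory]    ['_dictionary.title']
        [Item        Prohibited]   ['_category_key.name']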

<Parse DDLm validity information>= (<-U)
def ddlm_parse_valid(self):
    if "_dictionary_valid.application" not in self.master_block:
        return
    for scope_pack in self.master_block.GetLoop("_dictionary_valid.application"):
        scope = getattr(scope_pack,"_dictionary_valid.application")
        valid_info = getattr(scope_pack,"_dictionary_valid.attributes")
        if scope[1] == "Mandatory":
            self.scopes_mandatory[scope[0]] = self.expand_category_opt(valid_info)
        elif scope[1] == "Prohibited":
            self.scopes_naughty[scope[0]] = self.expand_category_opt(valid_info)

These methods were added when developing interactive editing tools, which allow shifting categories around.

<Definition manipulation methods>= (<-U)
<Changing and updating categories>
<Getting category information>

Changing a category name involves changing the _name.category_id in all children as well as the category definition itself and datablock names, then updating our internal structures.

<Changing and updating categories>= (<-U)
def change_category_name(self,oldname,newname):
    """Change the category name from [[oldname]] to [[newname]]"""
    self.unlock()
    if oldname not in self:
        raise KeyError('Cannot rename non-existent category %s to %s' % (oldname,newname))
    if newname in self:
        raise KeyError('Cannot rename %s to %s as %s already exists' % (oldname,newname,newname))
    child_defs = self.ddlm_immediate_children(oldname)
    self.rename(oldname,newname)   #NB no name integrity checks
    self[newname]['_name.object_id']=newname
    self[newname]['_definition.id']=newname
    for child_def in child_defs:
        self[child_def]['_name.category_id'] = newname
        if self[child_def].get('_definition.scope','Item')=='Item':
            newid = self.create_catobj_name(newname,self[child_def]['_name.object_id'])
            self[child_def]['_definition.id']=newid
            self.rename(child_def,newid[1:])  #no underscore at the beginning
    self.lock()

def create_catobj_name(self,cat,obj):
    """Combine category and object in approved fashion to create id"""
    return ('_'+cat+'.'+obj)

def change_category(self,itemname,catname):
    """Move itemname into catname, return new handle"""
    defid = self[itemname]
    if defid['_name.category_id'].lower()==catname.lower():
        print('Already in category, no change')
        return itemname
    if catname not in self:    #don't have it
        print('No such category %s' % catname)
        return itemname
    self.unlock()
    objid = defid['_name.object_id']
    defid['_name.category_id'] = catname
    newid = itemname # stays the same for categories
    if defid.get('_definition.scope','Item') == 'Item':
        newid = self.create_catobj_name(catname,objid)
        defid['_definition.id']= newid
        self.rename(itemname,newid)
    self.set_parent(catname,newid)
    self.lock()
    return newid

def change_name(self,one_def,newobj):
    """Change the object_id of one_def to newobj. This is not used for
    categories, but can be used for dictionaries"""
    if '_dictionary.title' not in self[one_def]:  #not a dictionary block
        newid = self.create_catobj_name(self[one_def]['_name.category_id'],newobj)
        self.unlock()
        self.rename(one_def,newid)
        self[newid]['_definition.id']=newid
        self[newid]['_name.object_id']=newobj
    else:
        self.unlock()
        newid = newobj
        self.rename(one_def,newobj)
        self[newid]['_dictionary.title'] = newid
    self.lock()
    return newid

# Note that our semantic parent is given by catparent, but our syntactic parent is
# always just the root block
def add_category(self,catname,catparent=None,is_loop=True,allow_dangler=False):
    """Add a new category to the dictionary with name [[catname]].
       If [[catparent]] is None, the category will be a child of
       the topmost 'Head' category or else the top data block. If
       [[is_loop]] is false, a Set category is created. If [[allow_dangler]]
       is true, the parent category does not have to exist."""
    if catname in self:
        raise CifError('Attempt to add existing category %s' % catname)
    self.unlock()
    syntactic_root = self.get_roots()[0][0]
    if catparent is None:
        semantic_root = [a for a in self.keys() if self[a].get('_definition.class',None)=='Head']
        if len(semantic_root)>0:
            semantic_root = semantic_root[0]
        else:
            semantic_root = syntactic_root
    else:
        semantic_root = catparent
    realname = super(CifDic,self).NewBlock(catname,parent=syntactic_root)
    self.block_id_table[catname.lower()]=realname
    self[catname]['_name.object_id'] = catname
    if not allow_dangler or catparent is None:
        self[catname]['_name.category_id'] = self[semantic_root]['_name.object_id']
    else:
        self[catname]['_name.category_id'] = catparent
    self[catname]['_definition.id'] = catname
    self[catname]['_definition.scope'] = 'Category'
    if is_loop:
        self[catname]['_definition.class'] = 'Loop'
    else:
        self[catname]['_definition.class'] = 'Set'
    self[catname]['_description.text'] = 'No definition provided'
    self.lock()
    return catname

def add_definition(self,itemname,catparent,def_text='PLEASE DEFINE ME',allow_dangler=False):
    """Add itemname to category [[catparent]]. If itemname contains periods,
    all text before the final period is ignored. If [[allow_dangler]] is True,
    no check for a parent category is made."""
    self.unlock()
    if '.' in itemname:
        objname = itemname.split('.')[-1]
    else:
        objname = itemname
    objname = objname.strip('_')
    if not allow_dangler and (catparent not in self or self[catparent]['_definition.scope']!='Category'):
        raise CifError('No category %s in dictionary' % catparent)
    fullname = '_'+catparent.lower()+'.'+objname
    print('New name: %s' % fullname)
    syntactic_root = self.get_roots()[0][0]
    realname = super(CifDic,self).NewBlock(fullname, fix=False, parent=syntactic_root) #low-level change
    # update our dictionary structures
    self.block_id_table[fullname.lower()]=realname  #lowercase key for lookup consistency
    self[fullname]['_definition.id']=fullname
    self[fullname]['_name.object_id']=objname
    self[fullname]['_name.category_id']=catparent
    self[fullname]['_definition.class']='Datum'
    self[fullname]['_description.text']=def_text
    return realname

def remove_definition(self,defname):
    """Remove a definition from the dictionary."""
    if defname not in self:
        return
    if self[defname].get('_definition.scope')=='Category':
        children = self.ddlm_immediate_children(defname)
        for child in children:    #remove children first
            self.remove_definition(child)
    del self[defname]

The DDLm architecture identifies a data definition by a (category, object) pair, which corresponds to a unique textual dataname appearing in the data file. Because of category joins when nested categories are looped, a single dataname may be referred to by several different category identifiers. The get_name_by_cat_obj routine will search all loop categories within the given category hierarchy until it finds the appropriate one.

If give_default is True, the default construction '_catid.objid' is returned if nothing is found in the dictionary. This should only be used during testing, as the lack of a corresponding definition in the dictionary means that it is unlikely that anything sensible will result.

<Getting category information>= (<-U)
def get_cat_obj(self,name):
    """Return (cat,obj) tuple. [[name]] must contain only a single period"""
    cat,obj = name.split('.')
    return (cat.strip('_'),obj)

def get_name_by_cat_obj(self,category,object,give_default=False):
    """Return the dataname corresponding to the given category and object"""
    if category[0] == '_':    #accidentally left in
       true_cat = category[1:].lower()
    else:
       true_cat = category.lower()
    try:
        return self.cat_obj_lookup_table[(true_cat,object.lower())][0]
    except KeyError:
        if give_default:
           return '_'+true_cat+'.'+object
    raise KeyError('No such category,object in the dictionary: %s %s' % (true_cat,object))
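
A brief usage sketch (the dataname is an illustrative assumption):

    # dic.get_cat_obj('_atom_site.label')            # -> ('atom_site', 'label')
    # dic.get_name_by_cat_obj('Atom_site','label')   # -> '_atom_site.label'
    # With give_default=True, a pair missing from the lookup table falls back
    # to the constructed name '_atom_site.label' instead of raising KeyError.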

Dictionaries have the category-wide information in the category definition area. We do not need to fill all of this in if we are not planning on running dREL.

<Add category information>= (<-U)
def add_category_info(self,full=True):
    if self.diclang == "DDLm":
        <Create category parent table>
        if full:
            <Create key hierarchy>
    else:
        self.parent_lookup = {}
        self.key_equivs = {}

This method was added for DDLm support. We are passed a category and a value, and must find a packet which has a matching key. We use the keyname as a way of finding the loop.

<Return a single packet by key>= (<-U)
def get_key_pack(self,category,value,data):
    keyname = self[category][self.unique_spec]
    onepack = data.GetPackKey(keyname,value)
    return onepack

For help in validation we create a lookup table which matches a category to its ultimate parent. This allows us to quickly check whether or not a data item is allowed to be co-looped with other data items. Note that we may have to draw in external dictionaries to do this properly, but to avoid holding the whole lot in memory, we simply stop searching up the parent tree if the parent block is missing.

<Create category parent table>= (<-U)
catblocks = [c for c in self.keys() if self[c].get('_definition.scope')=='Category']
looped_cats = [a for a in catblocks if self[a].get('_definition.class','Set') == 'Loop']
self.parent_lookup = {}
for one_cat in looped_cats:
    parent_cat = one_cat
    parent_def = self[parent_cat]
    next_up = parent_def['_name.category_id'].lower()
    while next_up in self and self[next_up].get('_definition.class','Set') == 'Loop':
        parent_def = self[next_up]
        parent_cat = next_up
        next_up = parent_def['_name.category_id'].lower()
    self.parent_lookup[one_cat] = parent_cat

The key hierarchy. This is in many ways reinventing the parent-child relationships that are laid out in DDL2 definitions. In order to access a particular packet using multiple datanames as compound keys, we need to be aware of which keys are related to which other keys. Relationships are always made explicit via the '_name.linked_item_id' attribute in DDLm, which always points to the parent. This is always present, even though it may often be inferred using Loop category parent/child relationships, as compound keys in categories might introduce ambiguity.

This datastructure allows us to provide a key, and obtain a list of equivalent keys, being all those above it in the hierarchy, that is, which it can be replaced by. If we are not doing dREL, we can afford to skip this.

<Create key hierarchy>= (<-U)
self.key_equivs = {}
for one_cat in looped_cats:   #follow them up
    lower_keys = listify(self[one_cat]['_category_key.name'])
    start_keys = lower_keys[:]
    while len(lower_keys)>0:
        this_cat = self[lower_keys[0]]['_name.category_id']
        parent = [a for a in looped_cats if self[this_cat]['_name.category_id'].lower()==a]
        #print("Processing %s, keys %s, parent %s" % (this_cat,repr(lower_keys),repr(parent)))
        if len(parent)>1:
            raise CifError("Category %s has more than one parent: %s" % (one_cat,repr(parent)))
        if len(parent)==0: break
        parent = parent[0]
        parent_keys = listify(self[parent]['_category_key.name'])
        linked_keys = [self[l]["_name.linked_item_id"] for l in lower_keys]
        # sanity check
        if set(parent_keys) != set(linked_keys):
            raise CifError("Parent keys and linked keys are different! %s/%s" % (parent_keys,linked_keys))
        # now add in our information
        for parent,child in zip(linked_keys,start_keys):
            self.key_equivs[child] = self.key_equivs.get(child,[])+[parent]
        lower_keys = linked_keys  #preserves order of start keys
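
The finished table maps each child key to the keys it may be replaced by (the datanames here are illustrative assumptions):

    # If '_atom_site_aniso.label' declares a '_name.linked_item_id' of
    # '_atom_site.label', then after the loop above:
    # self.key_equivs['_atom_site_aniso.label'] == ['_atom_site.label']
    # i.e. the child key may be substituted by any key above it in the hierarchy.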

DDLm functionality

DDLm is a far more complex dictionary standard than DDL2. We are able to import definitions in two modes, "Full" and "Contents". "Contents" simply copies the attributes found in the target definition, and is useful as a templating mechanism for commonly-seen attributes. "Full" brings in the entire definition block and all child definitions, and is useful for including entire dictionaries. As a special case, if we import a 'Head' definition into a 'Head' definition, we actually make all non-Head categories of the imported dictionary into child categories of the importing dictionary 'Head' category, and the imported 'Head' category disappears.

"Contents" and "Full" modes are implemented dynamically; that is, imports are only resolved when the value of an attribute is requested.
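
For orientation, an import request inside a definition takes roughly the following form (the file and frame names are illustrative assumptions; the code below reads the 'file', 'save' and 'mode' keys of each table in the list):

    _import.get
        [{'file':'templ_attr.cif'  'save':'Cartn_coord'  'mode':'Contents'}]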

The merging method of the StarFile object is purely syntactic and so does not understand DDLm relationships. We add all blocks as the children of the top-level dictionary block, and then in the case of a new 'Head' block we simply reparent the immediate semantic children of the old 'Head' block.

<Perform DDLm imports>= (<-U)
<Get import information>
<Heavy import routine>
<Light import routine>
<Lookup imports for whole dictionary>

<Get import information>= (<-U)
def obtain_imports(self,import_mode,heavy=False):
    """Collate import information"""
    self._import_dics = []
    import_frames = list([(a,self[a]['_import.get']) for a in self.keys() if '_import.get' in self[a]])
    print('Import mode %s applied to following frames' % import_mode)
    print(str([a[0] for a in import_frames]))
    if import_mode != 'All':
        for i in range(len(import_frames)):
            import_frames[i] = (import_frames[i][0],[a for a in import_frames[i][1] if a.get('mode','Contents').lower() == import_mode.lower()])
        print('Importing following frames in mode %s' % import_mode)
        print(str(import_frames))
    #resolve all references
    for parent_block,import_list in import_frames:
      for import_ref in import_list:
        file_loc = import_ref["file"]
        full_uri = self.resolve_path(file_loc)
        if full_uri not in self.template_cache:
            dic_as_cif = CifFile(full_uri,grammar=self.grammar)
            self.template_cache[full_uri] = CifDic(dic_as_cif,do_imports=import_mode,heavy=heavy,do_dREL=False)  #this will recurse internal imports
            print('Added %s to cached dictionaries' % full_uri)
        import_from = self.template_cache[full_uri]
        dupl = import_ref.get('dupl','Exit')
        miss = import_ref.get('miss','Exit')
        target_key = import_ref["save"]
        try:
            import_target = import_from[target_key]
        except KeyError:
            if miss == 'Exit':
               raise CifError('Import frame %s not found in %s' % (target_key,full_uri))
            else: continue
        # now import appropriately
        mode = import_ref.get("mode",'Contents').lower()
        if target_key in self and mode=='full':  #so blockname will be duplicated
            if dupl == 'Exit':
                raise CifError('Import frame %s already in dictionary' % target_key)
            elif dupl == 'Ignore':
                continue
        if heavy:
            self.ddlm_import(parent_block,import_from,import_target,target_key,mode)
        else:
            self.ddlm_import_light(parent_block,import_from,import_target,target_key,file_loc,mode)
            

The original way of doing imports was to completely merge the information from the imported file. This is slightly more efficient if information about import statements is not required.

<Heavy import routine>= (<-U)
def ddlm_import(self,parent_block,import_from,import_target,target_key,mode='All'):
        """Import other dictionaries in place"""
        if mode == 'contents':   #merge attributes only
            self[parent_block].merge(import_target)
        elif mode =="full":
            # Do the syntactic merge
            syntactic_head = self[self.get_parent(parent_block)] #root frame if no nesting
            from_cat_head = import_target['_name.object_id']
            child_frames = import_from.ddlm_all_children(from_cat_head)
             # Check for Head merging Head
            if self[parent_block].get('_definition.class','Datum')=='Head' and \
               import_target.get('_definition.class','Datum')=='Head':
                  head_to_head = True
            else:
                  head_to_head = False
                  child_frames.remove(from_cat_head)
            # As we are in syntax land, we call the CifFile methods
            child_blocks = list([import_from.block_id_table[a.lower()] for a in child_frames])
            child_blocks = super(CifDic,import_from).makebc(child_blocks)
            # Prune out any datablocks that have identical definitions
            from_defs = dict([(a,child_blocks[a].get('_definition.id','').lower()) for a in child_blocks.keys()])
            double_defs = list([b for b in from_defs.items() if self.has_key(b[1])])
            print('Definitions for %s superseded' % repr(double_defs))
            for b in double_defs:
                del child_blocks[b[0]]
            super(CifDic,self).merge_fast(child_blocks,parent=syntactic_head)      #
            print('Syntactic merge of %s (%d defs) in %s mode, now have %d defs' % (target_key,len(child_frames),
               mode,len(self)))
            # Now the semantic merge
            # First expand our definition <-> blockname tree
            self.create_def_block_table()
            merging_cat = self[parent_block]['_name.object_id']      #new parent
            if head_to_head:
                child_frames = self.ddlm_immediate_children(from_cat_head)    #old children
                #the new parent is the importing category for all old children
                for f in child_frames:
                    self[f].overwrite = True
                    self[f]['_name.category_id'] = merging_cat
                    self[f].overwrite = False
                # remove the old head
                del self[from_cat_head]
                print('Semantic merge: %d defs reparented from %s to %s' % (len(child_frames),from_cat_head,merging_cat))
            else:  #imported category is only child
                from_frame = import_from[target_key]['_definition.id'] #so we can find it
                child_frame = [d for d in self.keys() if self[d]['_definition.id']==from_frame][0]
                self[child_frame]['_name.category_id'] = merging_cat
                print('Semantic merge: category for %s : now %s' % (from_frame,merging_cat))
        # it will never happen again...
        del self[parent_block]["_import.get"]

def resolve_path(self,file_loc):
    url_comps = urlparse(file_loc)
    if url_comps[0]: return file_loc    #already full URI
    new_url = urljoin(self.my_uri,file_loc)
    #print("Transformed %s to %s for import " % (file_loc,new_url))
    return new_url

It is possible to not perform imports at reading time, but simply to register the links and resolve the imports if and when a definition is accessed.

<Light import routine>= (<-U)
def ddlm_import_light(self,parent_block,import_from,import_target,target_key,file_loc,mode='All'):
    """Register the imported dictionaries but do not alter any definitions. `parent_block`
    contains the id of the block that is importing. `import_target` is the block that
    should be imported. `import_from` is the CifFile that contains the definitions."""
    if mode == 'contents':   #merge attributes only
        self[parent_block].add_dict_cache(file_loc,import_from)
    elif mode =="full":
         # Check for Head merging Head
        if self[parent_block].get('_definition.class','Datum')=='Head' and \
           import_target.get('_definition.class','Datum')=='Head':
               head_to_head = True
        else:
               head_to_head = False
        # Figure out the actual definition ID
        head_id = import_target["_definition.id"]
        # Adjust parent information
        merging_cat = self[parent_block]['_name.object_id']
        from_cat_head = import_target['_name.object_id']
        if not head_to_head:   # imported category is only child
            import_target["_name.category_id"]=merging_cat
        self._import_dics = [(import_from,head_id)]+self._import_dics #prepend

Lightweight importation simply records the import information without performing the import, and then when keys are accessed it checks through the imported dictionaries. The semantics are such that the last dictionary imported should be the first dictionary checked, as imports overwrite any definitions in preceding imports.

<Lookup imports for whole dictionary>= (<-U)
def lookup_imports(self,key):
    """Check the list of imported dictionaries for this definition"""
    for one_dic,head_def in self._import_dics:
        from_cat_head = one_dic[head_def]['_name.object_id']
        possible_keys = one_dic.ddlm_all_children(from_cat_head)
        if key in possible_keys:
            return one_dic[key]
    raise KeyError("%s not found in import dictionaries" % key)
    

Merging a whole dictionary. A dictionary is a collection of categories for the purposes of merging (later we may want to keep some audit information).

<Add another DDLM dictionary>=
def get_whole_dict(self,source_dict,on_dupl,on_miss):
    for source_cat in source_dict.get_categories():
        self.get_one_cat(source_dict,source_cat,on_dupl,on_miss)

Merging a single category. If this category does not exist, we simply add the category block and any members of the category. If it does exist, we use the 'on_dupl' flag to resolve our behaviour, either ignoring, replacing, or dying a horrible death.

If the specified block is missing in the external dictionary, we either skip it or die a horrible death.

<Add an external DDLM category>=
def get_one_cat(self,source_dict,source_cat,on_dupl,on_miss):
    ext_cat = source_dict.get(source_cat,"")
    this_cat = self.get(source_cat,"")
    print("Adding category %s" % source_cat)
    if not ext_cat:
        if on_miss == "Ignore":
           pass
        else:
           raise CifError("Missing category %s" % source_cat)
    else:
        all_ext_defns = source_dict.keys()
        cat_list = [a for a in all_ext_defns
                    if source_dict[a].get("_name.category_id","").lower()==source_cat.lower()]
        print("Items: %s" % repr(cat_list))
        if this_cat:     # The category block itself is duplicated
            if on_dupl=="Ignore":
                pass
            elif on_dupl == "Exit":
                raise CifError("Duplicate category %s" % source_cat)
            else:
                self[source_cat] = ext_cat
        else:
            self[source_cat] = ext_cat
        # now do all member definitions
        for cat_defn in cat_list:
            self.add_one_defn(source_dict,cat_defn,on_dupl)

def add_one_defn(self,source_dict,cat_defn,on_dupl):
    if cat_defn in self:
       if on_dupl == "Ignore": pass
       elif on_dupl == "Exit":
               raise CifError("Duplicate definition %s" % cat_defn)
       else: self[cat_defn] = source_dict[cat_defn]
    else: self[cat_defn] = source_dict[cat_defn]
    print("    "+cat_defn)

This actually follows the children of the category down. We get a list of child categories and add them one by one recursively.

<Add an external DDLM category with children>=
def get_one_cat_with_children(self,source_dict,source_cat,on_dupl,on_miss):
    self.get_one_cat(source_dict,source_cat,on_dupl,on_miss)
    child_cats = [a for a in source_dict.get_categories() if source_dict[a]["_category.parent_id"]==source_dict[source_cat]["_definition.id"]]
    for child_cat in child_cats: self.get_one_cat(source_dict,child_cat,on_dupl,on_miss)

Importing into definitions. We are adjusting only the attributes of a single definition.

<Add attributes to definitions>=
def import_attributes(self,mykey,source_dict,source_def,on_dupl,on_miss):
    # process missing
    if source_def not in source_dict:
        if on_miss == 'Exit':
            raise CifError('Missing definition for import %s' % source_def)
        else: return          #nothing else to do
    # now do the import
    print('Adding attributes from %s to %s' % (source_def,mykey))
    self[mykey].merge(source_dict[source_def],mode='replace',match_att= \
          ['_definition.id','_name.category_id','_name.object_id'])

def import_loop(self,mykey,source_dict,source_def,loop_name,on_miss):
    # process missing
    if source_def not in source_dict:
        if on_miss == 'Exit':
            raise CifError('Missing definition for import %s' % source_def)
        else: return          #nothing else to do
    print('Adding %s attributes from %s to %s' % (loop_name,source_def,mykey))
    state_loop = source_dict[source_def].GetLoop(loop_name)
    self[mykey].insert_loop(state_loop)

Validation

A DDL provides lots of information that can be used to check a datafile or dictionary for consistency. Currently, the DDL-appropriate routines are installed at initialisation time.

<Validation routines>= (<-U)
<Install validation functions>
<Item-level validation>
<Loop-level validation>
<Cross-item validation>
<Block-level validation>
<Run validation tests>
<Optimisation on/off>

Each dictionary has a set of validation functions associated with it based on the information contained in the DDL. The following function is called on initialisation.

<Install validation functions>= (<-U)
def install_validation_functions(self):
    """Install the DDL-appropriate validation checks"""
    if self.diclang != 'DDLm':
        # functions which check conformance
        self.item_validation_funs = [
            self.validate_item_type,
            self.validate_item_esd,
            self.validate_item_enum,
            self.validate_enum_range,
            self.validate_looping
        ]
        # functions checking loop values
        self.loop_validation_funs = [
            self.validate_loop_membership,
            self.validate_loop_key,
            self.validate_loop_references
        ]
        # where we need to look at other values
        self.global_validation_funs = [
            self.validate_exclusion,
            self.validate_parent,
            self.validate_child,
            self.validate_dependents,
            self.validate_uniqueness
        ]
        # where only a full block will do
        self.block_validation_funs = [
            self.validate_mandatory_category
        ]
        # removal is quicker with special checks
        self.global_remove_validation_funs = [
            self.validate_remove_parent_child
        ]
    elif self.diclang == 'DDLm':
        self.item_validation_funs = [
            self.validate_item_enum,
            self.validate_item_esd_ddlm,
            ]
        self.loop_validation_funs = [
            self.validate_looping_ddlm,
            self.validate_loop_key_ddlm,
            self.validate_loop_membership
            ]
        self.global_validation_funs = []
        self.block_validation_funs = [
            self.check_mandatory_items,
            self.check_prohibited_items
            ]
        self.global_remove_validation_funs = []
    self.optimize = False        # default value
    self.done_parents = []
    self.done_children = []
    self.done_keys = []

Some things are independent of where an item occurs in the file; we check those things here. All functions are expected to return a dictionary with at least one key: "result", as well as optional keys depending on the type of error.
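
A sketch of the convention (the dataname and values are assumptions):

    # outcome = dic.validate_item_enum('_atom_site.type_symbol', 'Xx')
    # A passing check returns        {"result": True}
    # an inapplicable check returns  {"result": None}
    # and a failing check returns    {"result": False, "bad_values": ['Xx']}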

<Item-level validation>= (<-U)
<Validate the type of an item (DDL1/2)>
<Validate the type of an item (DDLm)>
<Validate esd presence>
<Validate enumeration range>
<Validate an enumeration>
<Validate looping properties>

Validate the type of an item

We use the expressions for type that we have available to check that the type of the item passed to us matches up. We may be given a list of items, so we allow for that. We define a tiny matching function so that we do not have to match twice to catch the non-matching case: a failed match returns None, which would raise an AttributeError if we immediately tried to call group() on it.

Note also that none of the extant dictionaries use the 'none' or 'seq' values for type. The seq value in particular would complicate matters.

<Validate the type of an item (DDL1/2)>= (<-U)
def validate_item_type(self,item_name,item_value):
    def mymatch(m,a):
        res = m.match(a)
        if res != None: return res.group()
        else: return ""
    target_type = self[item_name].get(self.type_spec)
    if target_type == None:          # e.g. a category definition
        return {"result":True}                  # not restricted in any way
    matchexpr = self.typedic[target_type]
    item_values = listify(item_value)
    #for item in item_values:
        #print("Type match " + item_name + " " + item + ":",)
    #skip dots and question marks
    check_all = [a for a in item_values if a !="." and a != "?"]
    check_all = [a for a in check_all if mymatch(matchexpr,a) != a]
    if len(check_all)>0: return {"result":False,"bad_values":check_all}
    else: return {"result":True}

DDLm types are far more nuanced, and we are not provided with prepacked regular expressions with which to check them. We have identified the following checks: that the value is in the correct container; that the contents are as described in _type.contents; that 'State' purpose datanames have a list of enumerated states; that 'Link' purpose datanames have '_name.linked_item_id' in the same definition; and that 'SU' purpose datanames also have the above.

<Validate the type of an item (DDLm)>= (<-U)
def decide(self,result_list):
    """Construct the return list"""
    if len(result_list)==0:
           return {"result":True}
    else:
           return {"result":False,"bad_values":result_list}

def validate_item_container(self, item_name,item_value):
    container_type = self[item_name]['_type.container']
    item_values = listify(item_value)
    if container_type == 'Single':
       okcheck = [a for a in item_values if not isinstance(a,(int,float,long,unicode))]
       return self.decide(okcheck)
    if container_type in ('Multiple','List'):
       okcheck = [a for a in item_values if not isinstance(a,StarList)]
       return self.decide(okcheck)
    if container_type == 'Array':    #A list with numerical values
       okcheck = [a for a in item_values if not isinstance(a,StarList)]
       first_check = self.decide(okcheck)
       if not first_check['result']: return first_check
       #num_check = [a for a in item_values if len([b for b in a if not isinstance

Esds. Numbers are sometimes not allowed to have esds appended. The default is that esds are not OK, and we should also skip anything that has character type, as that is automatically not a candidate for esds.

Note that we make use of the primitive type here; there are some cases where a string type looks like an esd, so unless we know we have a number we ignore these cases.

DDLm requires an esd if _type.purpose is Measurand, and should not have an esd if _type.purpose is Number.

<Validate esd presence>= (<-U)
def validate_item_esd(self,item_name,item_value):
    if self[item_name].get(self.primitive_type) != 'numb':
        return {"result":None}
    can_esd = self[item_name].get(self.esd_spec,"none") == "esd"
    if can_esd: return {"result":True}         #must be OK!
    item_values = listify(item_value)
    check_all = list([a for a in item_values if get_number_with_esd(a)[1] != None])
    if len(check_all)>0: return {"result":False,"bad_values":check_all}
    return {"result":True}

def validate_item_esd_ddlm(self,item_name,item_value):
    if self[item_name].get(self.primitive_type) not in \
    ['Count','Index','Integer','Real','Imag','Complex','Binary','Hexadecimal','Octal']:
        return {"result":None}
    can_esd = True
    if self[item_name].get('_type.purpose') != 'Measurand':
        can_esd = False
    item_values = listify(item_value)
    check_all = [get_number_with_esd(a)[1] for a in item_values]
    check_all = [v for v in check_all if (can_esd and v == None) or \
             (not can_esd and v != None)]
    if len(check_all)>0: return {"result":False,"bad_values":check_all}
    return {"result":True}

Enumeration ranges. Our dictionary has been prepared as for a DDL2 dictionary, where loops are used to specify closed or open ranges: if an entry exists where maximum and minimum values are equal, this means that this value is included in the range; otherwise, ranges are open. Our value is already numerical.

<Validate enumeration range>= (<-U)
def validate_enum_range(self,item_name,item_value):
    if "_item_range.minimum" not in self[item_name] and \
       "_item_range.maximum" not in self[item_name]:
        return {"result":None}
    minvals = self[item_name].get("_item_range.minimum",default = ["."])
    maxvals = self[item_name].get("_item_range.maximum",default = ["."])
    def makefloat(a):
        if a == ".": return a
        else: return float(a)
    maxvals = map(makefloat, maxvals)
    minvals = map(makefloat, minvals)
    rangelist = list(zip(minvals,maxvals))
    item_values = listify(item_value)
    def map_check(rangelist,item_value):
        if item_value == "?" or item_value == ".": return True
        iv,esd = get_number_with_esd(item_value)
        if iv==None: return None  #shouldn't happen as is numb type
        for lower,upper in rangelist:
            #check the minima
            if lower == ".": lower = iv - 1
            if upper == ".": upper = iv + 1
            if iv > lower and iv < upper: return True
            if upper == lower and iv == upper: return True
        # debug
        # print("Value %s fails range check %d < x < %d" % (item_value,lower,upper))
        return False
    check_all = [a for a in item_values if map_check(rangelist,a) != True]
    if len(check_all)>0: return {"result":False,"bad_values":check_all}
    else: return {"result":True}

Note that we must make a copy of the enum list, otherwise when we add in our ? and . they will modify the Cif in place, very sneakily, and next time we have a loop length check, e.g. in writing out, we will probably have a mismatch.

<Validate an enumeration>= (<-U)
def validate_item_enum(self,item_name,item_value):
    try:
        enum_list = self[item_name][self.enum_spec][:]
    except KeyError:
        return {"result":None}
    enum_list.append(".")   #default value
    enum_list.append("?")   #unknown
    item_values = listify(item_value)
    #print("Enum check: {!r} in {!r}".format(item_values, enum_list))
    check_all = [a for a in item_values if a not in enum_list]
    if len(check_all)>0: return {"result":False,"bad_values":check_all}
    else: return {"result":True}

Check that something can be looped. For DDL1 we have yes, no and both; for DDL2 there is no explicit restriction on looping beyond membership in a category. Note that the DDL1 language specifies a default value of 'no' for this item, so when not explicitly allowed by the dictionary, listing is prohibited. In DDLm, only members of 'Loop' categories allow looping. As we transition the whole setup to DDLm-type data structures, the two calls below will merge and move to the looping checks rather than the single-item checks.

<Validate looping properties>= (<-U)
def validate_looping(self,item_name,item_value):
    try:
        must_loop = self[item_name][self.must_loop_spec]
    except KeyError:
        return {"result":None}
    if must_loop == 'yes' and isinstance(item_value,(unicode,str)): # not looped
        return {"result":False}      #this could be triggered
    if must_loop == 'no' and not isinstance(item_value,(unicode,str)):
        return {"result":False}
    return {"result":True}

def validate_looping_ddlm(self,loop_names):
    """Check that all names are loopable"""
    truly_loopy = self.get_final_cats(loop_names)
    if len(truly_loopy)<len(loop_names):  #some are bad
        categories = [(a,self[a][self.cat_spec].lower()) for a in loop_names]
        not_looped = [a[0] for a in categories if a[1] not in self.parent_lookup.keys()]
        return {"result":False,"bad_items":not_looped}
    return {"result":True}

And some things are related to the group structure. Note that these functions do not require knowledge of the item values.

<Loop-level validation>= (<-U)
<Validate loop membership>
<Validate loop key>
<Validate loop key DDLm>
<Validate loop mandatory items>
<Get alternative item names>

Loop membership. The most common constraints on a loop are that all items are from the same category, and that loops of a certain category must contain a certain key to be valid. The latter test should be performed after the former test.

DDLm allows nested loop categories, so an item from a child category can appear in a parent category loop if both are from 'Loop' categories.

<Validate loop membership>= (<-U)
def validate_loop_membership(self,loop_names):
    final_cat = self.get_final_cats(loop_names)
    bad_items =  [a for a in final_cat if a != final_cat[0]]
    if len(bad_items)>0:
        return {"result":False,"bad_items":bad_items}
    else: return {"result":True}

def get_final_cats(self,loop_names):
    """Return a list of the uppermost parent categories for the loop_names. Names
    that are not from loopable categories are ignored."""
    try:
        categories = [self[a][self.cat_spec].lower() for a in loop_names]
    except KeyError:       #category_id is mandatory
        raise ValidCifError( "%s missing from dictionary %s for item in loop containing %s" % (self.cat_spec,self.dicname,loop_names[0]))
    truly_looped = [a for a in categories if a in self.parent_lookup.keys()]
    return [self.parent_lookup[a] for a in truly_looped]

The items specified by _list_mandatory (DDL1) must be present in a loop containing items of a given category (and it follows that only one loop in a given data block is available for any category containing such an item). This has been explicitly described as a key in DDL2. In DDLm, any key from a parent looped category is acceptable as well as the key of the given category itself.

<Validate loop key>= (<-U)
def validate_loop_key(self,loop_names):
    category = self[loop_names[0]][self.cat_spec]
    # find any unique values which must be present
    key_spec = self[category].get(self.key_spec,[])
    for names_to_check in key_spec:
        if isinstance(names_to_check,unicode):   #only one
            names_to_check = [names_to_check]
        for loop_key in names_to_check:
            if loop_key not in loop_names:
                #is this one of those dang implicit items?
                if self[loop_key].get(self.must_exist_spec,None) == "implicit":
                    continue          #it is virtually there...
                alternates = self.get_alternates(loop_key)
                if alternates == []:
                    return {"result":False,"bad_items":loop_key}
                # get_alternates returns a flat list of datanames
                alt = [a for a in alternates if a in loop_names]
                if len(alt) == 0:
                    return {"result":False,"bad_items":loop_key}  # no alternates present
    return {"result":True}

Validating keys in DDLm. We move everything to the uppermost parent category, and then lookup what keys can be used. If any of these are present, we are happy. This might miss some subtleties in mixed or unmixed loops?

<Validate loop key DDLm>= (<-U)
def validate_loop_key_ddlm(self,loop_names):
    """Make sure at least one of the necessary keys are available"""
    final_cats = self.get_final_cats(loop_names)
    if len(final_cats)>0:
        poss_keys = self.cat_key_table[final_cats[0]][0]
        found_keys = [a for a in poss_keys if a in loop_names]
        if len(found_keys)>0:
            return {"result":True}
        else:
            return {"result":False,"bad_items":poss_keys}
    else:
        return {"result":True}

The _list_reference value specifies data names which must co-occur with the defined data name. We check that this is indeed the case for all items in the loop. We trace through alternate values as well. In DDL1 dictionaries, a name terminating with an underscore indicates that any(?) corresponding name is suitable.

<Validate loop mandatory items>= (<-U)
def validate_loop_references(self,loop_names):
    must_haves = [self[a].get(self.list_ref_spec,None) for a in loop_names]
    must_haves = [a for a in must_haves if a != None]
    # build a flat list.  For efficiency we don't remove duplicates,as
    # we expect no more than the order of 10 or 20 looped names.
    def flat_func(a,b):
        if isinstance(b,unicode):
           a.append(b)       #single name
        else:
           a.extend(b)       #list of names
        return a
    flat_mh = []
    [flat_func(flat_mh,a) for a in must_haves]
    group_mh = filter(lambda a:a[-1]=="_",flat_mh)
    single_mh = filter(lambda a:a[-1]!="_",flat_mh)
    res = [a for a in single_mh if a not in loop_names]
    def check_gr(s_item, name_list):
        nl = map(lambda a:a[:len(s_item)],name_list)
        if s_item in nl: return True
        return False
    res_g = [a for a in group_mh if check_gr(a,loop_names)]
    if len(res) == 0 and len(res_g) == 0: return {"result":True}
    # construct alternate list
    alternates = [(a,self.get_alternates(a)) for a in res]
    # for error reporting: names which have no alternates at all
    missing_alts = [a[0] for a in alternates if a[1] == []]
    alternates = [a for a in alternates if a[1] != []]
    if len(alternates) != len(res):
        return {"result":False,"bad_items":missing_alts}   #short cut; at least one
                                                           #doesn't have an alternate
    #loop over alternates
    for orig_name,alt_names in alternates:
         alt = [a for a in alt_names if a in loop_names]
         if len(alt) == 0: return {"result":False,"bad_items":orig_name}# no alternates
    return {"result":True}        #found alternates

A utility function to return a list of alternate names given a main name. In DDL2 we have to deal with aliases. Each aliased item appears in our normalised dictionary independently, so there is no need to resolve aliases when looking up a data name. However, the original definition using DDL2-type names is simply copied to this aliased name during normalisation, so all references to other item names (e.g. _item_dependent) have to be resolved using the present function.

These aliases are returned in any case, so if we had a data file which mixed DDL1 and DDL2 style names, it may turn out to be valid, and what's more, we wouldn't necessarily detect an error if a data name and its alias were present - need to ponder this.

The exclusive_only option will only return items which must not co-exist with the item name in the same datablock. This includes aliases, and allows us to do a check that items and their aliases are not present at the same time in a data file.

<Get alternative item names>= (<-U)
def get_alternates(self,main_name,exclusive_only=False):
    alternates = self[main_name].get(self.related_func,None)
    alt_names = []
    if alternates != None:
        alt_names =  self[main_name].get(self.related_item,None)
        if isinstance(alt_names,unicode):
            alt_names = [alt_names]
            alternates = [alternates]
        together = zip(alt_names,alternates)
        if exclusive_only:
            alt_names = [a for a in together if a[1]=="alternate_exclusive" \
                                         or a[1]=="replace"]
        else:
            alt_names = [a for a in together if a[1]=="alternate" or a[1]=="replace"]
        alt_names = list([a[0] for a in alt_names])
    # now do the alias thing
    alias_names = listify(self[main_name].get("_item_aliases.alias_name",[]))
    alt_names.extend(alias_names)
    # print("Alternates for {}: {!r}".format(main_name, alt_names))
    return alt_names

Some checks require access to the entire data block. These functions take both a provisional dictionary and a global dictionary; the provisional dictionary includes items which will go into the dictionary together with the current item, and the global dictionary includes items which apply to all data blocks (this is for validation of DDL1/2 dictionaries).

<Cross-item validation>= (<-U)
<Validate exclusion rules>
<Validate parent child relations>
<Validate presence of dependents>
<Validate list uniqueness>

DDL2 dictionaries introduce the "alternate exclusive" category for related items. We also unilaterally include items listed in aliases as acting in this way.

<Validate exclusion rules>= (<-U)
def validate_exclusion(self,item_name,item_value,whole_block,provisional_items={},globals={}):
   alternates = [a.lower() for a in self.get_alternates(item_name,exclusive_only=True)]
   item_name_list = [a.lower() for a in whole_block.keys()]
   item_name_list.extend([a.lower() for a in provisional_items.keys()])
   bad = [a for a in alternates if a in item_name_list]
   if len(bad)>0:
       print("Bad: %s, alternates %s" % (repr(bad),repr(alternates)))
       return {"result":False,"bad_items":bad}
   else: return {"result":True}

When validating parent/child relations, we check the parent link to the children, and separately check that parents exist for any children present. Switching on optimisation will remove the redundancy in this procedure, but only if no changes are made to the relevant data items between the two checks.

It appears that DDL2 dictionaries allow parents to be absent if children take only unspecified values (i.e. dot or question mark). We catch this case.

The provisional items dictionary includes items that are going to be included with the present item (in a single loop structure) so the philosophy of inclusion must be all or nothing.

When validating DDL2 dictionaries themselves, we are allowed access to other definition blocks in order to resolve parent-child pointers. We will be able to find these save frames inside the globals dictionary (they will in this case be collected inside a CifBlock object).

When removing, we look at the item to make sure that no child items require it to be present.

<Validate parent child relations>= (<-U)
# validate that parent exists and contains matching values
def validate_parent(self,item_name,item_value,whole_block,provisional_items={},globals={}):
    parent_item = self[item_name].get(self.parent_spec)
    if not parent_item: return {"result":None}   #no parent specified
    if isinstance(parent_item,list):
        parent_item = parent_item[0]
    if self.optimize:
        if parent_item in self.done_parents:
            return {"result":None}
        else:
            self.done_parents.append(parent_item)
            print("Done parents %s" % repr(self.done_parents))
    # initialise parent/child values
    if isinstance(item_value,unicode):
        child_values = [item_value]
    else: child_values = item_value[:]    #copy for safety
    # track down the parent
    # print("Looking for {} parent item {} in {!r}".format(item_name, parent_item, whole_block))
    # if globals contains the parent values, we are doing a DDL2 dictionary, and so
    # we have collected all parent values into the global block - so no need to search
    # for them elsewhere.
    # print("Looking for {!r}".format(parent_item))
    parent_values = globals.get(parent_item)
    if not parent_values:
        parent_values = provisional_items.get(parent_item,whole_block.get(parent_item))
    if not parent_values:
        # go for alternates
        namespace = list(whole_block.keys())
        namespace.extend(provisional_items.keys())
        namespace.extend(globals.keys())
        alt_names = filter_present(self.get_alternates(parent_item),namespace)
        if len(alt_names) == 0:
            if len([a for a in child_values if a != "." and a != "?"])>0:
                return {"result":False,"parent":parent_item}#no parent available -> error
            else:
                return {"result":None}       #maybe True is more appropriate??
        parent_item = alt_names[0]           #should never be more than one??
        parent_values = provisional_items.get(parent_item,whole_block.get(parent_item))
        if not parent_values:   # check global block
            parent_values = globals.get(parent_item)
    if isinstance(parent_values,unicode):
        parent_values = [parent_values]
    #print("Checking parent %s against %s, values %r/%r" % (parent_item,
    #                                          item_name, parent_values, child_values))
    missing = self.check_parent_child(parent_values,child_values)
    if len(missing) > 0:
        return {"result":False,"bad_values":missing,"parent":parent_item}
    return {"result":True}

def validate_child(self,item_name,item_value,whole_block,provisional_items={},globals={}):
    try:
        child_items = self[item_name][self.child_spec][:]  #copy
    except KeyError:
        return {"result":None}    #not relevant
    # special case for dictionaries  -> we check parents of children only
    if item_name in globals:  #dictionary so skip
        return {"result":None}
    if isinstance(child_items,unicode): # only one child
        child_items = [child_items]
    if isinstance(item_value,unicode): # single value
        parent_values = [item_value]
    else: parent_values = item_value[:]
    # expand child list with list of alternates
    for child_item in child_items[:]:
        child_items.extend(self.get_alternates(child_item))
    # now loop over the children
    for child_item in child_items:
        if self.optimize:
            if child_item in self.done_children:
                return {"result":None}
            else:
                self.done_children.append(child_item)
                print("Done children %s" % repr(self.done_children))
        if child_item in provisional_items:
            child_values = provisional_items[child_item][:]
        elif child_item in whole_block:
            child_values = whole_block[child_item][:]
        else:  continue
        if isinstance(child_values,unicode):
            child_values = [child_values]
        # print("Checking child %s against %s, values %r/%r" % (child_item,
        #       item_name, child_values, parent_values))
        missing = self.check_parent_child(parent_values,child_values)
        if len(missing)>0:
            return {"result":False,"bad_values":missing,"child":child_item}
    return {"result":True}       #could mean that no child items present

#a generic checker: all child vals should appear in parent_vals
def check_parent_child(self,parent_vals,child_vals):
    # shield ourselves from dots and question marks
    pv = parent_vals[:]
    pv.extend([".","?"])
    res =  [a for a in child_vals if a not in pv]
    #print("Missing: %s" % res)
    return res
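
# A quick sketch of the helper (the values are assumptions):
#   check_parent_child(['C1','C2'], ['C1','.','?','C3']) -> ['C3']
# Dots and question marks never count as missing, so only genuinely
# unmatched child values are reported.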

def validate_remove_parent_child(self,item_name,whole_block):
    try:
        child_items = self[item_name][self.child_spec]
    except KeyError:
        return {"result":None}
    if isinstance(child_items,unicode): # only one child
        child_items = [child_items]
    for child_item in child_items:
        if child_item in whole_block:
            return {"result":False,"child":child_item}
    return {"result":True}

The DDL2 _item_dependent attribute at first glance appears to be the same as _list_reference, however the dependent item does not have to appear in a loop at all, and neither does the other item name. Perhaps this behaviour was intended to be implied by having looped _names in DDL1 dictionaries, but we can't be sure and so don't implement this yet.

<Validate presence of dependents>= (<-U)
def validate_dependents(self,item_name,item_value,whole_block,prov={},globals={}):
    try:
        dep_items = self[item_name][self.dep_spec][:]
    except KeyError:
        return {"result":None}    #not relevant
    if isinstance(dep_items,unicode):
        dep_items = [dep_items]
    actual_names = list(whole_block.keys())
    actual_names.extend(prov.keys())
    actual_names.extend(globals.keys())
    missing = [a for a in dep_items if a not in actual_names]
    if len(missing) > 0:
        alternates = [[self.get_alternates(a),a] for a in missing]
        # compact way to get a list of alternative items which are
        # present
        have_check = [(filter_present(b[0],actual_names),
                                   b[1]) for b in alternates]
        have_check = list([a for a in have_check if len(a[0])==0])
        if len(have_check) > 0:
            have_check = [a[1] for a in have_check]
            return {"result":False,"bad_items":have_check}
    return {"result":True}

The _list_uniqueness attribute permits specification of one or more items which must have a unique combined value. Currently it is used only in the powder dictionary, to indicate that peaks must have a unique index, and in the core dictionary, to indicate that a publication section name together with its label must be unique; however, it would appear to apply implicitly to any index-type value in any dictionary. It is used precisely once in the cif_core dictionary, in a non-intuitive manner, but we code for this here: the value of the _list_uniqueness attribute can actually refer to another data name, which together with the defined name must be unique.

DDL2 dictionaries do away with separate _list_mandatory and _list_uniqueness attributes, instead using a _category_key. If multiple keys are specified, they must be unique in combination, in accordance with standard relational database behaviour.

<Validate list uniqueness>= (<-U)
def validate_uniqueness(self,item_name,item_value,whole_block,provisional_items={},
                                                              globals={}):
    category = self[item_name].get(self.cat_spec)
    if category == None:
        print("No category found for %s" % item_name)
        return {"result":None}
    # print("Category {!r} for item {}".format(category, item_name))
    # we make a copy in the following as we will be removing stuff later!
    unique_i = self[category].get("_category_key.name",[])[:]
    if isinstance(unique_i,unicode):
        unique_i = [unique_i]
    if item_name not in unique_i:       #no need to verify
        return {"result":None}
    if isinstance(item_value,unicode):  #not looped
        return {"result":None}
    # print("Checking %s -> %s -> %s ->Unique: %r" % (item_name,category,catentry, unique_i))
    # check that we can't optimize by not doing this check
    if self.optimize:
        if unique_i in self.done_keys:
            return {"result":None}
        else:
            self.done_keys.append(unique_i)
    val_list = []
    # get the matching data from any other data items
    unique_i.remove(item_name)
    other_data = []
    if len(unique_i) > 0:            # i.e. do have others to think about
        for other_name in unique_i:
            # we look for the value first in the provisional dict, then the
            # main block; anything in the provisional dict overrides the
            # main block
            if other_name in provisional_items:
                other_data.append(provisional_items[other_name])
            elif other_name in whole_block:
                other_data.append(whole_block[other_name])
            elif self[other_name].get(self.must_exist_spec)=="implicit":
                other_data.append([item_name]*len(item_value))  #placeholder
            else:
                return {"result":False,"bad_items":other_name}  #missing data name
    # ok, so we go through all of our values
    # this works by comparing lists of strings to one other, and
    # so could be fooled if you think that '1.' and '1' are
    # identical
    for i in range(len(item_value)):
        #print("Value no. %d" % i, end=" ")
        this_entry = item_value[i]
        for j in range(len(other_data)):
            this_entry = " ".join([this_entry,other_data[j][i]])
        #print("Looking for {!r} in {!r}: ".format(this_entry, val_list))
        if this_entry in val_list:
            return {"result":False,"bad_values":this_entry}
        val_list.append(this_entry)
    return {"result":True}

<Block-level validation>= (<-U)
<Validate category presence>
<Process DDLm mandatory information>
<Process DDLm prohibited information>

DDL2 introduces a new idea, that of a mandatory category, items of which must be present. We check only this particular fact, and leave the checks for mandatory items within the category, keys etc. to the relevant routines. This would appear to be applicable to dictionaries only.

Also, although the natural meaning for a DDL2 dictionary would be that items from these categories must appear in every definition block, this is not what happens in practice, as category definitions do not have anything from the (mandatory) _item_description category. We therefore adopt the supremely useless meaning that mandatory categories in a dictionary context mean only that somewhere, maybe in only one save frame, an item from this category exists. This interpretation is forced by using the "fake_mand" argument, which then assumes that the alternative routine will be used to set the error information on a dictionary-wide basis.

<Validate category presence>= (<-U)
def validate_mandatory_category(self,whole_block):
    mand_cats = [self[a]['_category.id'] for a in self.keys() if self[a].get("_category.mandatory_code","no")=="yes"]
    if len(mand_cats) == 0:
        return {"result":True}
    # print("Mandatory categories - {!r}".format(mand_cats)
    # find which categories each of our datanames belongs to
    all_cats = [self[a].get(self.cat_spec) for a in whole_block.keys()]
    missing = set(mand_cats) - set(all_cats)
    if len(missing) > 0:
        return {"result":False,"bad_items":repr(missing)}
    return {"result":True}

Processing DDLm mandatory categories/items

DDLm manages mandatory items by providing a table in the DDLm dictionary which classifies datanames into mandatory/recommended/prohibited for dictionary, category or item scopes. Note that the following check might fail for categories and dictionaries if '_definition.scope' or '_dictionary.title' is missing.

<Process DDLm mandatory information>= (<-U)
def check_mandatory_items(self,whole_block,default_scope='Item'):
    """Return an error if any mandatory items are missing"""
    if len(self.scopes_mandatory)== 0: return {"result":True}
    if default_scope == 'Datablock':
        return {"result":True}     #is a data file
    scope = whole_block.get('_definition.scope',default_scope)
    if '_dictionary.title' in whole_block:
        scope = 'Dictionary'
    missing = list([a for a in self.scopes_mandatory[scope] if a not in whole_block])
    if len(missing)==0:
        return {"result":True}
    else:
        return {"result":False,"bad_items":missing}

<Process DDLm prohibited information>= (<-U)
def check_prohibited_items(self,whole_block,default_scope='Item'):
    """Return an error if any prohibited items are present"""
    if len(self.scopes_naughty)== 0: return {"result":True}
    if default_scope == 'Datablock':
        return {"result":True}     #is a data file
    scope = whole_block.get('_definition.scope',default_scope)
    if '_dictionary.title' in whole_block:
        scope = 'Dictionary'
    present = list([a for a in self.scopes_naughty[scope] if a in whole_block])
    if len(present)==0:
        return {"result":True}
    else:
        return {"result":False,"bad_items":present}

These validation checks are intended to be called externally. They return a dictionary keyed by item name with value being a list of the results of the check functions. The individual functions return a dictionary which contains at least the key "result", and in case of error relevant keys relating to the error.

<Run validation tests>= (<-U)
def run_item_validation(self,item_name,item_value):
    return {item_name:list([(f.__name__,f(item_name,item_value)) for f in self.item_validation_funs])}

def run_loop_validation(self,loop_names):
    return {loop_names[0]:list([(f.__name__,f(loop_names)) for f in self.loop_validation_funs])}

def run_global_validation(self,item_name,item_value,data_block,provisional_items={},globals={}):
    results = list([(f.__name__,f(item_name,item_value,data_block,provisional_items,globals)) for f in self.global_validation_funs])
    return {item_name:results}

def run_block_validation(self,whole_block,block_scope='Item'):
    results = list([(f.__name__,f(whole_block)) for f in self.block_validation_funs])
    # fix up the return values
    return {"whole_block":results}

Optimisation: the dictionary validation routines normally retain no history of what has been checked, as they are executed on a per-item basis. This leads to duplication of the uniqueness check when there is more than one key, and of the parent-child check, which otherwise runs once for the parent and once for the child. By switching on optimisation, a record is kept and these checks are not repeated. This is safe only if none of the relevant items is altered while optimisation is on, and optimisation should be switched off as soon as all the checks are finished.

<Optimisation on/off>= (<-U)
def optimize_on(self):
    self.optimize = True
    self.done_keys = []
    self.done_children = []
    self.done_parents = []

def optimize_off(self):
    self.optimize = False
    self.done_keys = []
    self.done_children = []
    self.done_parents = []
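
A typical usage pattern (sketched here with assumed dic and block objects) brackets a batch of per-item checks, much as the run_data_checks method of ValidCifBlock does later in this file:

    # dic is an initialised CifDic, block a CifBlock (assumptions)
    dic.optimize_on()
    try:
        for name in block.keys():
            dic.run_item_validation(name, block[name])
    finally:
        dic.optimize_off()   #always switch off once checking is complete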

Preparing our type expressions

In DDL2 dictionaries our type expressions are given in the main block as POSIX regexps, so we can pass them on to the re package. For DDL1 dictionaries we could get them from the DDL1 language definition, but for now we just hard code them. Essentially only the number definition is important, as the syntax check during reading/writing will catch any char violations.

Note that the python re engine is not POSIX compliant in that it will not return the longest leftmost match, but rather the first leftmost match. John Bollinger suggested an obvious fix: we append a $ to force a full match.
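
A two-line demonstration of the difference (not part of the dictionary code):

    import re
    re.match(r"(\d+|\d+\.\d+)", "12.5").group(1)    # -> '12'   (first alternative wins)
    re.match(r"(\d+|\d+\.\d+)$", "12.5").group(1)   # -> '12.5' (anchor forces the full match)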

In other regexp editing, the \{ sequence inside the character sets of some of the regexps is actually interpreted as an escaped bracket, so the backslash vanishes. We add it back in by doing a very hackish and ugly substitution which substitutes these two characters anywhere that they occur inside square brackets. A final change is to insert a \r wherever we find a \n, as this seems to have been left out. After these changes, and after appending the default expressions, we can work with DDL2 expressions directly.

We keep the primitive code for the single reason that we need to know when we are dealing with a number that has an esd appended, and this is flagged by the primitive code being of type 'numb'.

<Add type information>= (<-U)
def add_type_info(self):
    if "_item_type_list.construct" in self.master_block:
        types = self.master_block["_item_type_list.code"]
        prim_types = self.master_block["_item_type_list.primitive_code"]
        constructs = list([a + "$" for a in self.master_block["_item_type_list.construct"]])
        # add in \r wherever we see \n, and change \{ to \\{
        def regex_fiddle(mm_regex):
            brack_match = r"((.*\[.+)(\\{)(.*\].*))"
            ret_match = r"((.*\[.+)(\\n)(.*\].*))"
            fixed_regexp = mm_regex[:]  #copy
            # fix the brackets
            bm = re.match(brack_match,mm_regex)
            if bm != None:
                fixed_regexp = bm.expand(r"\2\\\\{\4")
            # fix missing \r
            rm = re.match(ret_match,fixed_regexp)
            if rm != None:
                fixed_regexp = rm.expand(r"\2\3\\r\4")
            #print("Regexp %s becomes %s" % (mm_regex,fixed_regexp))
            return fixed_regexp
        constructs = [regex_fiddle(a) for a in constructs]
        for typecode,construct in zip(types,constructs):
            self.typedic[typecode] = re.compile(construct,re.MULTILINE|re.DOTALL)
        # now make a primitive <-> type construct mapping
        for typecode,primtype in zip(types,prim_types):
            self.primdic[typecode] = primtype
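
Once add_type_info has run, a value can be checked against its construct roughly as follows (the 'float' type code is illustrative only; real codes come from _item_type_list.code):

    # hypothetical check against a compiled DDL2 construct
    if dic.typedic['float'].match('1.23e-4'):
        pass                        # value conforms to the construct
    prim = dic.primdic['float']     # primitive code, e.g. 'numb' for numbers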

Linkage to dREL

The drel_ast_yacc package will generate an Abstract Syntax Tree, which we then convert to a Python function using py_from_ast.make_python_function. We use it during initialisation to transform all methods to python expressions, and the derive_item method will then use these to try to derive values. Note that newline is the only recognised statement separator in dREL, so we make sure all lines are separated in this way. We also allow multiple 'Evaluation' methods, which is an enhancement of the current standard.

The make_python_function function requires dictionary information to be supplied regarding looped categories and keys.

If we were really serious about dictionary-driven software, the attribute lookups that follow would not use get() with hard-coded defaults; instead, missing attributes would be resolved automatically from the defaults defined in ddl.dic. However, that would require assigning a dictionary to the dictionary and consequent automated searches, which I cannot be bothered to do at this stage. Just be aware that the default value in each get() call below is the _enumeration.default specified in ddl.dic.

<Linkage to dREL>= (<-U)
<Initialise dREL functions>
<Transform drel to python>
<Store dREL functions>
<Derive item information>
<Storing a dREL-derived value>
<Construct a category>
<Insert category items from pullback information>
<Storing a whole new dREL-derived category>
<Generating default packets>

Full initialisation. This can take some time so we optionally skip it, but can call this function separately at a later stage if needed.

<Initialise dREL functions>= (<-U)
def initialise_drel(self):
    """Parse drel functions and prepare data structures in dictionary"""
    self.ddlm_parse_valid() #extract validity information from data block
    self.transform_drel()   #parse the drel functions
    self.add_drel_funcs()   #put the drel functions into the namespace

<Transform drel to python>= (<-U)
def transform_drel(self):
    from .drel import drel_ast_yacc
    from .drel import py_from_ast
    import traceback
    parser = drel_ast_yacc.parser
    lexer = drel_ast_yacc.lexer
    my_namespace = self.keys()
    my_namespace = dict(zip(my_namespace,my_namespace))
    # we provide a table of loopable categories {cat_name:((key1,key2..),[item_name,...]),...})
    loopable_cats = self.get_loopable_cats()
    loop_keys = [listify(self[a]["_category_key.name"]) for a in loopable_cats]
    loop_keys = [[self[a]['_name.object_id'] for a in b] for b in loop_keys]
    cat_names = [self.names_in_cat(a,names_only=True) for a in loopable_cats]
    loop_info = dict(zip(loopable_cats,zip(loop_keys,cat_names)))
    # parser.listable_items = [a for a in self.keys() if "*" in self[a].get("_type.dimension","")]
    derivable_list = [a for a in self.keys() if "_method.expression" in self[a] \
                          and self[a].get("_name.category_id","")!= "function"]
    for derivable in derivable_list:
        target_id = derivable
        # reset the list of visible names for parser
        special_ids = [dict(zip(self.keys(),self.keys()))]
        print("Target id: %s" % derivable)
        drel_exprs = self[derivable]["_method.expression"]
        drel_purposes = self[derivable]["_method.purpose"]
        all_methods = []
        if not isinstance(drel_exprs,list):
            drel_exprs = [drel_exprs]
            drel_purposes = [drel_purposes]
        for drel_purpose,drel_expr in zip(drel_purposes,drel_exprs):
            if drel_purpose != 'Evaluation':
                continue
            drel_expr = "\n".join(drel_expr.splitlines())
            # print("Transforming %s" % drel_expr)
            # List categories are treated differently...
            try:
                meth_ast = parser.parse(drel_expr+"\n",lexer=lexer)
            except Exception:
                print('Syntax error in method for %s; leaving as is' % derivable)
                a,b = sys.exc_info()[:2]
                print((repr(a),repr(b)))
                traceback.print_tb(sys.exc_info()[-1],None,sys.stdout)
                # reset the lexer
                lexer.begin('INITIAL')
                continue
            # Construct the python method
            cat_meth = False
            if self[derivable].get('_definition.scope','Item') == 'Category':
                cat_meth = True
            pyth_meth = py_from_ast.make_python_function(meth_ast,"pyfunc",target_id,
                                                                       loopable=loop_info,
                                                         cif_dic = self,cat_meth=cat_meth)
            all_methods.append(pyth_meth)
        if len(all_methods)>0:
            save_overwrite = self[derivable].overwrite
            self[derivable].overwrite = True
            self[derivable]["_method.py_expression"] = all_methods
            self[derivable].overwrite = save_overwrite
        #print("Final result:\n " + repr(self[derivable]["_method.py_expression"]))

Drel functions are all stored in category 'functions' in our final dictionary. We want to convert them to executable python code and store them in an appropriate namespace which we can then pass to our individual item methods. As dREL accepts only linefeed as a terminator, we convert the input text as required.

<Store dREL functions>= (<-U)
def add_drel_funcs(self):
    from .drel import drel_ast_yacc
    from .drel import py_from_ast
    funclist = [a for a in self.keys() if self[a].get("_name.category_id","")=='function']
    funcnames = [(self[a]["_name.object_id"],
                  getattr(self[a].GetKeyedPacket("_method.purpose","Evaluation"),"_method.expression")) for a in funclist]
    # create executable python code...
    parser = drel_ast_yacc.parser
    # we provide a table of loopable categories {cat_name:(key,[item_name,...]),...})
    loopable_cats = self.get_loopable_cats()
    loop_keys = [listify(self[a]["_category_key.name"]) for a in loopable_cats]
    loop_keys = [[self[a]['_name.object_id'] for a in b] for b in loop_keys]
    cat_names = [self.names_in_cat(a,names_only=True) for a in loopable_cats]
    loop_info = dict(zip(loopable_cats,zip(loop_keys,cat_names)))
    for funcname,funcbody in funcnames:
        newline_body = "\n".join(funcbody.splitlines())
        parser.target_id = funcname
        res_ast = parser.parse(newline_body)
        py_function = py_from_ast.make_python_function(res_ast,None,targetname=funcname,func_def=True,loopable=loop_info,cif_dic = self)
        #print('dREL library function ->\n' + py_function)
        global_table = globals()
        exec(py_function, global_table)    #add to namespace
    #print('Globals after dREL functions added:' + repr(globals()))
    self.ddlm_functions = globals()  #for outside access

When a dictionary is available during CIF file access, we can resolve a missing dataname in three ways: (1) check whether it is present under an alias; (2) use a dREL method to calculate the value; (3) use a default value, if one is defined. We resolve in this priority order, converting to the appropriate type as we go. A refinement of (2) is that, if the entire category is missing, we can use either DDLm category construction information or a category method to find our values; we do this only if no items in the category are present. We raise a StarDerivationError if we cannot derive the item; internally, result remains None while we work through the possible derivation routes.

The store_value flag asks us to update the ciffile object with the new value. We remove any numpy dependencies before doing this, which means that we must recreate the numpy type when returning it.

The allow_defaults flag allows default values to be derived. In a situation where multiple methods are available for deriving an item, a calculation that accepts default values will return incorrect values in any situation where an alternative calculation method would have given correct values. For example, if the default value of axis.vector[n] is 0, but I can use an alternative derivation for axis.vector from a different setting, then a calculation that creates axis.vector from the components will give the wrong answer as it will fill in default values when the components are missing. The track_recursion decorator code handles this by propagating the initial value of allow_defaults to nested calls.
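
For orientation, a call might look like this (the dataname and data block are invented; the signature is the one defined below):

    # derive a missing value, caching it in the file and permitting defaults
    value = cdic.derive_item('_cell.volume', cf['my_block'],
                             store_value=True, allow_defaults=True)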

<Derive item information>= (<-U)
@track_recursion
def derive_item(self,start_key,cifdata,store_value = False,allow_defaults=True):
    key = start_key   #starting value
    result = None     #success is a non-None value
    default_result = False #we have not used a default value
    <Resolve using aliases>
    the_category = self[key]["_name.category_id"]
    cat_names = [a for a in self.keys() if self[a].get("_name.category_id",None)==the_category]
    has_cat_names = [a for a in cat_names if cifdata.has_key_or_alias(a)]
    # store any default value in case we have a problem
    def_val = self[key].get("_enumeration.default","")
    def_index_val = self[key].get("_enumeration.def_index_id","")
    if len(has_cat_names)==0: # try category method
        <Populate a category>
    # Recalculate in case it actually worked
    has_cat_names = [a for a in cat_names if cifdata.has_key_or_alias(a)]
    the_funcs = self[key].get('_method.py_expression',"")
    if the_funcs:   #attempt to calculate it
        <Execute pythonised dREL method>
    if result is None and allow_defaults:   # try defaults
        <Work out default value of dataname>
    # read it in
    if result is None:   #can't do anything else
        print('Warning: no way of deriving item %s, allow_defaults is %s' % (key,repr(allow_defaults)))
        raise StarFile.StarDerivationError(start_key)
    <Adjust value to be appropriate length>
    # now try to insert the new information into the right place
    # find if items of this category already appear...
    # Never cache empty values
    if not (isinstance(result,list) and len(result)==0) and\
      store_value:
        if self[key].get("_definition.scope","Item")=='Item':
            if is_looped:
                result = self.store_new_looped_value(key,cifdata,result,default_result)
            else:
                result = self.store_new_unlooped_value(key,cifdata,result)
        else:
            self.store_new_cat_values(cifdata,result,the_category)
    return result

Adjusting our calculated value. If we have used a default value or we have None, we need to make the dimension match the currently-existing length of the category.

<Adjust value to be appropriate length>= (<-U)
is_looped = False
if self[the_category].get('_definition.class','Set')=='Loop':
    is_looped = True
    if len(has_cat_names)>0:   #this category already exists
        if result is None or default_result: #need to create a list of values
            loop_len = len(cifdata[has_cat_names[0]])
            out_result = [result]*loop_len
            result = out_result
    else:   #nothing exists in this category, we can't store this at all
        print('Resetting result %s for %s to null list as category is empty' % (repr(result),key))
        result = []

Storing a dREL-derived value back into our CifFile. The dREL value (or potentially a simple default value) may correspond to an entire column, or even an entire loop for category methods. We have to distinguish between list values that are StarLists, that is, a single CIF value, and list values that correspond to a column of a loop. Additionally, testing has revealed that we cannot judge the type of elements in a list by the first element (e.g. could be a plain list, then a numpy array).

The conv_from_numpy mini-functions are designed to handle arbitrary numpy arrays quickly.
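
The behaviour of the inner helper can be sketched in isolation (assuming numpy is installed): arrays with more than one element become nested StarLists via tolist(), while single-element arrays collapse to plain Python scalars:

    import numpy
    m = numpy.array([[1,0],[0,1]])
    m.size > 1                     # True: would be converted via m.tolist()
    numpy.array([3.5]).item(0)     # -> 3.5, a plain Python float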

<Storing a dREL-derived value>= (<-U)
def store_new_looped_value(self,key,cifdata,result,default_result):
    """Store a looped value from the dREL system into a CifFile"""
    # try to change any matrices etc. to lists
    the_category = self[key]["_name.category_id"]
    out_result = result
    if result is not None and not default_result:
        # find any numpy arrays
        def conv_from_numpy(one_elem):
            if not hasattr(one_elem,'dtype'):
                if isinstance(one_elem,(list,tuple)):
                    return StarFile.StarList([conv_from_numpy(a) for a in one_elem])
                return one_elem
            if one_elem.size > 1:   #so is not a scalar
                return StarFile.StarList([conv_from_numpy(a) for a in one_elem.tolist()])
            else:
                try:
                    return one_elem.item(0)
                except Exception:
                    return one_elem
        out_result = [conv_from_numpy(a) for a in result]
    # so out_result now contains a value suitable for storage
    cat_names = [a for a in self.keys() if self[a].get("_name.category_id",None)==the_category]
    has_cat_names = [a for a in cat_names if a in cifdata]
    print('Adding {}, found pre-existing names: '.format(key) + repr(has_cat_names))
    if len(has_cat_names)>0:   #this category already exists
        cifdata[key] = out_result      #lengths must match or else!!
        cifdata.AddLoopName(has_cat_names[0],key)
    else:
        cifdata[key] = out_result
        cifdata.CreateLoop([key])
    print('Loop info:' + repr(cifdata.loops))
    return out_result

def store_new_unlooped_value(self,key,cifdata,result):
    """Store a single value from the dREL system"""
    if result is not None and hasattr(result,'dtype'):
        if result.size > 1:
            out_result = StarFile.StarList(result.tolist())
            cifdata[key] = out_result
        else:
            cifdata[key] = result.item(0)
    else:
        cifdata[key] = result
    return result

Storing category results. dREL allows 'category methods', which initialise an entire category. The dREL system that we have written returns a dictionary of lists, with the dictionary keys being item names. It is sufficient for us to extract each of these names and pass them to our normal storage routine. If some of the values in the category key are duplicated, we bail, as we may overwrite previous values. We also bail if we do not have exactly the same datanames available, as we are too lazy to insert 'unknown' in the non-matching positions.

<Storing a whole new dREL-derived category>= (<-U)
def store_new_cat_values(self,cifdata,result,the_category):
    """Store the values in [[result]] into [[cifdata]]"""
    the_key = [a for a in result.keys() if self[a].get('_type.purpose','')=='Key']
    double_names = [a for a in result.keys() if a in cifdata]
    if len(double_names)>0:
        already_present = [a for a in self.names_in_cat(the_category) if a in cifdata]
        if set(already_present) != set(result.keys()):
            print("Category %s not updated, mismatched datanames: %s" % (the_category, repr(set(already_present)^set(result.keys()))))
            return
        #check key values (the_key should contain exactly one name)
        old_keys = set(cifdata[the_key[0]])
        common_keys = old_keys & set(result[the_key[0]])
        if len(common_keys)>0:
            print("Category %s not updated, key values in common: %s" % (the_category,repr(common_keys)))
            return
        #extend result values with old values
        for one_name,one_value in result.items():
            result[one_name].extend(cifdata[one_name])
    for one_name, one_value in result.items():
        try:
            self.store_new_looped_value(one_name,cifdata,one_value,False)
        except StarFile.StarError:
            print('%s: Not replacing %s with calculated %s' % (one_name,repr(cifdata[one_name]),repr(one_value)))
    #put the key as the first item
    print('Fixing item order for {}'.format(repr(the_key)))
    for one_key in the_key:  #should only be one
        cifdata.ChangeItemOrder(one_key,0)


Executing a dREL method. The execution defines a function, 'pyfunc' which is then itself executed in global scope. This has caused us some grief in order to get the bindings right (e.g. having StarList in scope). Essentially, anything that the method might refer to should be in scope at this point, otherwise the way Python works it will be too late to have things in scope within the enclosing routine that calls this function. Importing the necessary modules at the beginning of the module file (as done here) seems to be a reliable way to go.
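
A stripped-down illustration of the scoping point (the function body is invented): names used inside the exec'd function are resolved in the globals passed to exec, so module-level imports are what count, not the enclosing routine's locals:

    # works only because StarFile was imported at module level
    exec("def pyfunc(d): return StarFile.StarList([1,2])", globals())
    pyfunc(None)    # resolves StarFile from module globals at call time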

<Execute pythonised dREL method>= (<-U)
#global_table = globals()
#global_table.update(self.ddlm_functions)
for one_func in the_funcs:
    print('Executing function for %s:' % key)
    #print(one_func)
    exec(one_func, globals())  #will access dREL functions, puts "pyfunc" in scope
    # print('in following global environment: ' + repr(global_table))
    stored_setting = cifdata.provide_value
    cifdata.provide_value = True
    try:
        result = pyfunc(cifdata)
    except CifRecursionError as s:
        print(s)
        result = None
    except StarFile.StarDerivationError as s:
        print(s)
        result = None
    finally:
        cifdata.provide_value = stored_setting
    if result is not None:
        break
    #print("Function returned {!r}".format(result))

Creating categories

A category can be created from scratch (i.e. the identifiers produced) if the appropriate DDLm attributes are defined - currently, the experimental 'category_construct_local' attributes are included in the test dictionaries for this purpose. They define two types of 'pullback' (see any category theory textbook), which we can use to create a category. If these attributes are absent, we can instead execute a category method. We only add new category items calculated in this way if the category does not yet exist or, when it does, if (i) the calculated key values are not already present and (ii) the set of datanames calculated exactly matches the set already present.

<Populate a category>= (<-U)
cat_result = {}
pulled_from_cats = [k for k in self.keys() if '_category_construct_local.components' in self[k]]
pulled_from_cats = [(k,[self[n]['_name.category_id'] for n in self[k]['_category_construct_local.components']])
                    for k in pulled_from_cats]
pulled_to_cats = [k[0] for k in pulled_from_cats if the_category in k[1]]
if '_category_construct_local.type' in self[the_category]:
    print("**Now constructing category %s using DDLm attributes**" % the_category)
    try:
        cat_result = self.construct_category(the_category,cifdata,store_value=True)
    except (CifRecursionError,StarFile.StarDerivationError):
        print('** Failed to construct category %s (error)' % the_category)
# Trying a pull-back when the category is partially populated
# will not work, hence we test that cat_result has no keys
if len(pulled_to_cats)>0 and len(cat_result)==0:
    print("**Now populating category %s from pulled-back category %s" % (the_category,repr(pulled_to_cats)))
    try:
        cat_result = self.push_from_pullback(the_category,pulled_to_cats,cifdata,store_value=True)
    except (CifRecursionError,StarFile.StarDerivationError):
        print('** Failed to construct category %s from pullback information (error)' % the_category)
if '_method.py_expression' in self[the_category] and key not in cat_result:
    print("**Now applying category method for %s in search of %s**" % (the_category,key))
    cat_result = self.derive_item(the_category,cifdata,store_value=True)
print("**Tried pullbacks, obtained for %s " % the_category + repr(cat_result))
# do we now have our value?
if key in cat_result:
    return cat_result[key]

Constructing categories using DDLm attributes. We have defined local attributes that describe category construction using mathematical 'pullbacks'. We can use these to fill a category, but also to populate a category if the pullback category is available. We use list to coerce all values to a list in case we are passed a numpy array, which does not have an 'index' method.
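
The pullback at the heart of this routine can be shown in miniature (all values invented): key values from the two source categories are paired whenever their mapped dataname values agree:

    cat1_keys = ['a1','a2']; cat1_map = ['x','y']
    cat2_keys = ['b1','b2']; cat2_map = ['y','x']
    pairs = [(x,y) for x in cat1_keys for y in cat2_keys
             if cat1_map[cat1_keys.index(x)] == cat2_map[cat2_keys.index(y)]]
    # pairs == [('a1','b2'), ('a2','b1')]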

<Construct a category>= (<-U)
def construct_category(self,category,cifdata,store_value=True):
    """Construct a category using DDLm attributes"""
    con_type = self[category].get('_category_construct_local.type',None)
    if con_type == None:
        return {}
    if con_type == 'Pullback' or con_type == 'Filter':
        morphisms  = self[category]['_category_construct_local.components']
        morph_values = [cifdata[a] for a in morphisms] # the mapped values for each cat
        cats = [self[a]['_name.category_id'] for a in morphisms]
        cat_keys = [self[a]['_category.key_id'] for a in cats]
        cat_values = [list(cifdata[a]) for a in cat_keys] #the category key values for each cat
        if con_type == 'Filter':
            int_filter = self[category].get('_category_construct_local.integer_filter',None)
            text_filter = self[category].get('_category_construct_local.text_filter',None)
            if int_filter is not None:
                morph_values.append([int(a) for a in int_filter])
            if text_filter is not None:
                morph_values.append(text_filter)
            cat_values.append(range(len(morph_values[-1])))
        # create the mathematical product filtered by equality of dataname values
        pullback_ids = [(x,y) for x in cat_values[0] for y in cat_values[1] \
                        if morph_values[0][cat_values[0].index(x)]==morph_values[1][cat_values[1].index(y)]]
        # now prepare for return
        if len(pullback_ids)==0:
            return {}
        newids = self[category]['_category_construct_local.new_ids']
        fullnewids = [self.cat_obj_lookup_table[(category,n)][0] for n in newids]
        if con_type == 'Pullback':
            final_results = {fullnewids[0]:[x[0] for x in pullback_ids],fullnewids[1]:[x[1] for x in pullback_ids]}
            final_results.update(self.duplicate_datanames(cifdata,cats[0],category,key_vals = final_results[fullnewids[0]],skip_names=newids))
            final_results.update(self.duplicate_datanames(cifdata,cats[1],category,key_vals = final_results[fullnewids[1]],skip_names=newids))
        elif con_type == 'Filter':   #simple filter
            final_results = {fullnewids[0]:[x[0] for x in pullback_ids]}
            final_results.update(self.duplicate_datanames(cifdata,cats[0],category,key_vals = final_results[fullnewids[0]],skip_names=newids))
        if store_value:
            self.store_new_cat_values(cifdata,final_results,category)
        return final_results

Going the other way. If we have the pulled-back category, we can populate the pulled-from categories with their identifier items using projections from the pulled-back category. In the special case that we have a pullback that uses a filter function with a single element, we can automatically populate the whole commutative square. We also by default populate identically-named datanames.

The projection datanames are given in _category_construct_local.new_ids, and they always map to the key of the projected-to category.

<Insert category items from pullback information>= (<-U)
def push_from_pullback(self,target_category,source_categories,cifdata,store_value=True):
    """Each of the categories in source_categories are pullbacks that include
    the target_category"""
    target_key = self[target_category]['_category.key_id']
    result = {target_key:[]}
    first_time = True
    # for each source category, determine which element goes to the target
    for sc in source_categories:
        components = self[sc]['_category_construct_local.components']
        comp_cats = [self[c]['_name.category_id'] for c in components]
        new_ids = self[sc]['_category_construct_local.new_ids']
        source_ids = [self.cat_obj_lookup_table[(sc,n)][0] for n in new_ids]
        if len(components) == 2:  # not a filter
            element_pos = comp_cats.index(target_category)
            old_id = source_ids[element_pos]
            print('Using %s to populate %s' % (old_id,target_key))
            result[target_key].extend(cifdata[old_id])
            # project through all identical names
            extra_result = self.duplicate_datanames(cifdata,sc,target_category,skip_names=new_ids+[target_key])
            # we only include keys that are common to all categories
            if first_time:
                result.update(extra_result)
            else:
                for k in extra_result.keys():
                    if k in result:
                        print('Updating %s: was %s' % (k,repr(result[k])))
                        result[k].extend(extra_result[k])
        else:
            extra_result = self.duplicate_datanames(cifdata,sc,target_category,skip_names=new_ids)
            if len(extra_result)>0 or source_ids[0] in cifdata:  #something is present
                result[target_key].extend(cifdata[source_ids[0]])
                for k in extra_result.keys():
                    if k in result:
                        print('Reverse filter: Updating %s: was %s' % (k,repr(result[k])))
                        result[k].extend(extra_result[k])
                    else:
                        result[k]=extra_result[k]
                # Bonus derivation if there is a singleton filter
                if self[sc]['_category_construct_local.type'] == 'Filter':
                    int_filter = self[sc].get('_category_construct_local.integer_filter',None)
                    text_filter = self[sc].get('_category_construct_local.text_filter',None)
                    if int_filter is not None:
                        filter_values = int_filter
                    else:
                        filter_values = text_filter
                    if len(filter_values)==1:    #a singleton
                        extra_dataname = self[sc]['_category_construct_local.components'][0]
                        if int_filter is not None:
                            new_value = [int(filter_values[0])] * len(cifdata[source_ids[0]])
                        else:
                            new_value = filter_values * len(cifdata[source_ids[0]])
                        if extra_dataname not in result:
                            result[extra_dataname] = new_value
                        else:
                            result[extra_dataname].extend(new_value)
                else:
                    raise ValueError('Unexpected category construct type: ' + self[sc]['_category_construct_local.type'])
        first_time = False
    # check for sanity - all dataname lengths must be identical
    datalen = len(set([len(a) for a in result.values()]))
    if datalen != 1:
        raise AssertionError('Failed to construct equal-length category items,'+ repr(result))
    if store_value:
        print('Now storing ' + repr(result))
        self.store_new_cat_values(cifdata,result,target_category)
    return result

def duplicate_datanames(self,cifdata,from_category,to_category,key_vals=None,skip_names=[]):
    """Copy across datanames for which the from_category key equals [[key_vals]]"""
    result = {}
    s_names_in_cat = set(self.names_in_cat(from_category,names_only=True))
    t_names_in_cat = set(self.names_in_cat(to_category,names_only=True))
    can_project = s_names_in_cat & t_names_in_cat
    can_project -= set(skip_names)  #already dealt with
    source_key = self[from_category]['_category.key_id']
    print('Source dataname set: ' + repr(s_names_in_cat))
    print('Target dataname set: ' + repr(t_names_in_cat))
    print('Projecting through following datanames from %s to %s' % (from_category,to_category) + repr(can_project))
    for project_name in can_project:
        full_from_name = self.cat_obj_lookup_table[(from_category.lower(),project_name.lower())][0]
        full_to_name = self.cat_obj_lookup_table[(to_category.lower(),project_name.lower())][0]
        if key_vals is None:
            try:
                result[full_to_name] = cifdata[full_from_name]
            except StarFile.StarDerivationError:
                pass
        else:
            all_key_vals = cifdata[source_key]
            filter_pos = [all_key_vals.index(a) for a in key_vals]
            try:
                all_data_vals = cifdata[full_from_name]
            except StarFile.StarDerivationError:
                continue   #cannot obtain this dataname, move to the next
            result[full_to_name] = [all_data_vals[i] for i in filter_pos]
    return result

Aliases. If we have this item under a different name, we find it and return it immediately after converting it to the correct type. We could be passed either the dictionary-defined dataname, or any of its previous names. Aliases are stored as a table indexed by the dictionary-defined dataname, so that old datanames can be translated to new ones. Once we find a dataname that is present in the datafile, we return it. Note that there are two types of check: in one we are given an old-style dataname and have to find the new (or another old) version, in which case we must search the table contents; in the other we are given the latest version of the dataname and have to check for older names in the datafile. The latter is the dREL situation, so we have optimised for it by checking it first and by making the modern datanames the table keys. Note that this section of code occurs first in the derive_item routine and will change the value of key to the dictionary-defined name even if nothing is available in the datafile, thereby enabling the other derivation routes.
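
The table shape assumed below can be pictured as follows (the datanames are examples only):

    # alias_table: modern (dictionary-defined) name -> list of older forms
    alias_table = {'_cell.volume': ['_cell_volume']}
    # forward check: is an older form of '_cell.volume' in the datafile?
    # reverse check: given '_cell_volume', which definition owns it?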

<Resolve using aliases>= (<-U)
# check for aliases
# check for an older form of a new value
found_it = [k for k in self.alias_table.get(key,[]) if k in cifdata]
if len(found_it)>0:
    corrected_type = self.change_type(key,cifdata[found_it[0]])
    return corrected_type
# now do the reverse check - any alternative form
alias_name = [a for a in self.alias_table.items() if key in a[1]]
print('Aliases for %s: %s' % (key,repr(alias_name)))
if len(alias_name)==1:
    key = alias_name[0][0]   #actual definition name
    if key in cifdata: return self.change_type(key,cifdata[key])
    found_it = [k for k in alias_name[0][1] if k in cifdata]
    if len(found_it)>0:
        return self.change_type(key,cifdata[found_it[0]])
elif len(alias_name)>1:
    raise CifError('Dictionary error: dataname alias appears in different definitions: ' + repr(alias_name))

Using the defaults system. We also check out any default values which we could return in case of error. Note that ddlm adds the '_enumerations.def_index_id' as an alternative way to derive a value from a table. During development, we deliberately allow errors arising from the method to be propagated so that we can see anything that might be wrong.

If we are using default values, we need to fill in the whole column of a looped category. This is taken care of at the end of the derivation function, so we simply set a flag to say that this is necessary.

<Work out default value of dataname>= (<-U)
if def_val:
    result = self.change_type(key,def_val)
    default_result = True
elif def_index_val:            #derive a default value
    index_vals = self[key]["_enumeration_default.index"]
    val_to_index = cifdata[def_index_val]     #what we are keying on
    lcase_comp = False         #case-sensitive comparison by default
    if self[def_index_val]['_type.contents'] in ['Code','Name','Tag']:
        lcase_comp = True
        index_vals = [a.lower() for a in index_vals]
    # Handle loops
    if isinstance(val_to_index,list):
        if lcase_comp:
            val_to_index = [a.lower() for a in val_to_index]
        keypos = [index_vals.index(a) for a in val_to_index]
        result = [self[key]["_enumeration_default.value"][a]  for a in keypos]
    else:
        if lcase_comp:
            val_to_index = val_to_index.lower()
        keypos = index_vals.index(val_to_index)   #value error if no such value available
        result = self[key]["_enumeration_default.value"][keypos]
        default_result = True   #flag that it must be extended
    result = self.change_type(key,result)
    print("Indexed on %s to get %s for %s" % (def_index_val,repr(result),repr(val_to_index)))

If a key is missing, we may sometimes fill in default values for it, for example, a missing atom type may be assumed to have a number in cell of 0.
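
A sketch of a call (the category, key and value are invented; the signature is defined below):

    # build a packet of default values for a missing atom type 'C'
    pack = cdic.generate_default_packet('atom_type','_atom_type.symbol','C')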

<Generating default packets>= (<-U)
def generate_default_packet(self,catname,catkey,keyvalue):
    """Return a StarPacket with items from ``catname`` and a key value
    of ``keyvalue``"""
    newpack = StarPacket()
    for na in self.names_in_cat(catname):
        def_val = self[na].get("_enumeration.default","")
        if def_val:
            final_val = self.change_type(na,def_val)
            newpack.append(final_val)   #append: one positional slot per value
            setattr(newpack,na,final_val)
    if len(newpack)>0:
        newpack.append(keyvalue)
        setattr(newpack,catkey,keyvalue)
    return newpack

In the single case of executing dREL methods, we wish to return numpy arrays from our __getitem__ so that the mathematical operations proceed as expected for matrix etc. objects. This needs to be reimplemented: currently numpy must be installed for 'numerification' to work.

<Switch on numpy arrays>= (<-U)
def switch_numpy(self,to_val):
    pass

This function converts the string-valued items returned from the parser into types that correspond to the dictionary specifications. For DDLm it must also deal with potentially complex structures containing both strings and numbers. We have tried to avoid introducing a general dependence on Numpy in PyCIFRW, but once we get into the realm of DDLm we require Numpy arrays in order to handle the various processing tasks. This routine is the one that will create the arrays from the StarList types, so it needs access to numpy. However, it is only called if a DDLm dictionary has been provided, so we should still have no Numpy dependence for non-DDLm cases.

For safety, we check that our object is really string-valued. In practice, this means that it is either a string, a list of strings, or a list of StarLists as these are the only datastructures that an as-parsed file will contain.

<Convert string to appropriate type>= (<-U)
def change_type(self,itemname,inval):
    if inval == "?": return inval
    change_function = convert_type(self[itemname])
    if isinstance(inval,list) and not isinstance(inval,StarFile.StarList):   #from a loop
        newval = list([change_function(a) for a in inval])
    else:
        newval = change_function(inval)
    return newval

We may be passed float values which have esds appended. We catch this case by searching for an opening round bracket.

<Convert value to float, ignore esd>= (U->)
def float_with_esd(inval):
    if isinstance(inval,unicode):
        j = inval.find("(")
        if j>=0:  return float(inval[:j])
    return float(inval)
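
For example:

    float_with_esd('1.234(5)')   # -> 1.234, the esd in parentheses is dropped
    float_with_esd('1.234')      # -> 1.234, plain values pass straight through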



This function analyses a DDL1-type range expression, returning a maximum and minimum value. If the number format were ever to change, we need to change this right here, right now.

<Analyse range>= (<-U)
def getmaxmin(self,rangeexp):
    regexp = '(-?(([0-9]*[.]([0-9]+))|([0-9]+)[.]?)([eEdD][+-]?[0-9]+)?)*'
    regexp = regexp + ":" + regexp
    match = re.match(regexp,rangeexp)
    try:
        minimum = match.group(1)
        maximum = match.group(7)
    except AttributeError:
        print("Can't match %s" % rangeexp)
        return ".","."
    if minimum == None: minimum = "."
    else: minimum = float(minimum)
    if maximum == None: maximum = "."
    else: maximum = float(maximum)
    return maximum,minimum
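
Note that the return order is (maximum, minimum), and an open-ended bound comes back as '.'; for example:

    cdic.getmaxmin('0.0:')     # -> ('.', 0.0)
    cdic.getmaxmin('1:100')    # -> (100.0, 1.0)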

Outputting dictionaries

We would like dictionary blocks to be output in a readable order, that is, parent categories before their child definitions. The base BlockCollection output routines have no knowledge of save frame interrelations, so we have to override the output block order returned by the get_child_list routine.

<Dictionary output routines>= (<-U)
def WriteOut(self,**kwargs):
    myblockorder = self.get_full_child_list()
    self.set_grammar(self.grammar)
    self.standard = 'Dic'
    return super(CifDic,self).WriteOut(blockorder = myblockorder,**kwargs)

def get_full_child_list(self):
    """Return a list of definition blocks in order parent-child-child-child-parent-child..."""
    top_block = self.get_roots()[0][0]
    root_cat = [a for a in self.keys() if self[a].get('_definition.class','Datum')=='Head']
    if len(root_cat) == 1:
        all_names = [top_block] + self.recurse_child_list(root_cat[0])
        unrooted = self.ddlm_danglers()
        double_names =  set(unrooted).intersection(set(all_names))
        if len(double_names)>0:
            raise CifError('Names are children of internal and external categories:%s' % repr(double_names))
        remaining = unrooted[:]
        for no_root in unrooted:
            if self[no_root].get('_definition.scope','Item')=='Category':
                all_names += [no_root]
                remaining.remove(no_root)
                these_children = [n for n in unrooted if self[n]['_name.category_id'].lower()==no_root.lower()]
                all_names += these_children
                for n in these_children: remaining.remove(n)
        # now sort by category
        ext_cats = set([self[r].get('_name.category_id',self.cat_from_name(r)).lower() for r in remaining])
        for e in ext_cats:
            cat_items = [r for r in remaining if self[r].get('_name.category_id',self.cat_from_name(r)).lower() == e]
            for n in cat_items: remaining.remove(n)
            all_names += cat_items
        if len(remaining)>0:
            print('WARNING: following items do not seem to belong to a category??')
            print(repr(remaining))
            all_names += remaining
        print('Final block order: ' + repr(all_names))
        return all_names
    raise ValueError('Dictionary contains no/multiple Head categories, please print as plain CIF instead')

def cat_from_name(self,one_name):
    """Guess the category from the name. This should be used only when this is not important semantic information,
    for example, when printing out"""
    (cat,obj) = one_name.split(".")
    if cat[0] == "_": cat = cat[1:]
    return cat

def recurse_child_list(self,parentname):
    """Recursively expand the logical child list of [[parentname]]"""
    final_list = [parentname]
    child_blocks = [a for a in self.child_table.keys() if self[a].get('_name.category_id','').lower() == parentname.lower()]
    child_blocks.sort()    #we love alphabetical order
    child_items = [a for a in child_blocks if self[a].get('_definition.scope','Item') == 'Item']
    final_list += child_items
    child_cats = [a for a in child_blocks if self[a].get('_definition.scope','Item') == 'Category']
    for child_cat in child_cats:
        final_list += self.recurse_child_list(child_cat)
    return final_list



Valid CIFS

A whole new can of worms is opened up when we require that a CIF is not only syntactically correct, but valid according to the specified dictionary.

A valid CIF is essentially a collection of valid CIF blocks. It may be the case in the future that inter-block relationships need to be checked, so we define a separate ValidCifFile class.

<A valid CIF block>= (<-U)
class ValidCifBlock(CifBlock):
    """A `CifBlock` that is valid with respect to a given CIF dictionary.  Methods
    of `CifBlock` are overridden where necessary to disallow addition of invalid items to the
    `CifBlock`.

    ## Initialisation

    * `dic` is a `CifDic` object to be used for validation.

    """
    <Initialise with dictionary>
    <Run data checks>
    <Check input data>
    <Redefine item adding and removing>
    <Validation report>

The dic argument contains a previously initialised dictionary. Alternatively, we can provide a list of filenames/CifFiles in diclist, which are merged according to mergemode. If both are provided, diclist is ignored.
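
A typical construction might therefore read as follows (the block and dictionary are assumed to exist already, and we assume the underlying CifBlock accepts its usual initial data argument); an invalid block raises ValidCifError immediately:

    # validate existing data against a single pre-built dictionary
    vb = ValidCifBlock(dic=cdic, data=cf['my_block'])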

<Initialise with dictionary>= (<-U)
def __init__(self,dic = None, diclist=[], mergemode = "replace",*args,**kwords):
    CifBlock.__init__(self,*args,**kwords)
    if dic and diclist:
        print("Warning: diclist argument ignored when initialising ValidCifBlock")
    if dic is not None:
        if not isinstance(dic,CifDic):
            raise TypeError("ValidCifBlock passed non-CifDic type in dic argument")
        self.fulldic = dic
    elif len(diclist) > 0:
        self.fulldic = merge_dic(diclist,mergemode)
    else:
        raise ValidCifError("At least one dictionary must be specified")
    if not self.run_data_checks()[0]:
        raise ValidCifError(self.report())

Run all of these data checks. The dictionary validation methods return a list of tuples (validation function name, result) for each item. When checking a full data block, we can make use of the optimisation facilities provided in the CifDic object.

<Run data checks>= (<-U)
def run_data_checks(self,verbose=False):
    self.v_result = {}
    self.fulldic.optimize_on()
    for dataname in self.keys():
        update_value(self.v_result,self.fulldic.run_item_validation(dataname,self[dataname]))
        update_value(self.v_result,self.fulldic.run_global_validation(dataname,self[dataname],self))
    for loop_names in self.loops.values():
        update_value(self.v_result,self.fulldic.run_loop_validation(loop_names))
    # now run block-level checks
    update_value(self.v_result,self.fulldic.run_block_validation(self))
    # return false and list of baddies if anything didn't match
    self.fulldic.optimize_off()
    all_keys = list(self.v_result.keys()) #dictionary will change
    for test_key in all_keys:
        #print("%s: %r" % (test_key, self.v_result[test_key]))
        self.v_result[test_key] = [a for a in self.v_result[test_key] if a[1]["result"]==False]
        if len(self.v_result[test_key]) == 0:
            del self.v_result[test_key]
    isvalid = len(self.v_result)==0
    #if not isvalid:
    #    print("Baddies: {!r}".format(self.v_result))
    return isvalid,self.v_result

Report back. We summarise the contents of v_result, writing one line for each failed test per item.

<Validation report>= (<-U)
def report(self):
    outstr = StringIO()
    outstr.write("Validation results\n")
    outstr.write("------------------\n")
    outstr.write("%d invalid items found\n\n" % len(self.v_result))
    for item_name,val_func_list in self.v_result.items():
        outstr.write("%s fails following tests:\n" % item_name)
        for val_func in val_func_list:
            outstr.write("\t%s\n" % val_func[0])
    return outstr.getvalue()

It is not a mistake for a data name to be absent from any of the specified dictionaries, so we have to check that we have a match before running any data checks, rather than simply raising an error immediately.
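
For example (dataname invented), a single check returns a validity flag together with the list of failed tests:

    ok, problems = vb.single_item_check('_cell.volume','100.0')
    if not ok:
        print([p[0] for p in problems])   #names of failed validation functions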

<Check input data>= (<-U)
def single_item_check(self,item_name,item_value):
    #self.match_single_item(item_name)
    if item_name not in self.fulldic:
        result = {item_name:[]}
    else:
        result = self.fulldic.run_item_validation(item_name,item_value)
    baddies = list([a for a in result[item_name] if a[1]["result"]==False])
    # if even one false one is found, this should trigger
    isvalid = (len(baddies) == 0)
    # if not isvalid: print("Failures for {}: {!r}".format(item_name, baddies))
    return isvalid,baddies

def loop_item_check(self,loop_names):
    in_dic_names = list([a for a in loop_names if a in self.fulldic])
    if len(in_dic_names)==0:    #nothing to check against
        return True,[]
    result = self.fulldic.run_loop_validation(in_dic_names)
    baddies = list([a for a in result[in_dic_names[0]] if a[1]["result"]==False])
    # if even one false one is found, this should trigger
    isvalid = (len(baddies) == 0)
    # if not isvalid: print("Failures for {}: {!r}".format(loop_names, baddies))
    return isvalid,baddies

def global_item_check(self,item_name,item_value,provisional_items={}):
    if item_name not in self.fulldic:
        result = {item_name:[]}
    else:
        result = self.fulldic.run_global_validation(item_name,
           item_value,self,provisional_items = provisional_items)
    baddies = list([a for a in result[item_name] if a[1]["result"] is False])
    # if even one false one is found, this should trigger
    isvalid = (len(baddies) == 0)
    # if not isvalid: print("Failures for {}: {!r}".format(item_name, baddies))
    return isvalid,baddies

def remove_global_item_check(self,item_name):
    if item_name not in self.fulldic:
        result = {item_name:[]}
    else:
        result = self.fulldic.run_remove_global_validation(item_name,self,False)
    baddies = [a for a in result[item_name] if a[1]["result"]==False]
    # if even one false one is found, this should trigger
    isvalid = (len(baddies) == 0)
    # if not isvalid: print("Failures for {}: {!r}".format(item_name, baddies))
    return isvalid,baddies
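
By way of illustration, these checks might be driven as follows; the file and block names are hypothetical, and this snippet is not part of the tangled source:

# Example (illustrative only)
vb = ValidCifBlock(data=CifFile("mydata.cif")["mydata"],
                   dic=CifDic("cif_core.dic"))
ok, problems = vb.single_item_check("_cell_length_a", "5.043(2)")
if not ok:
    # each problem is a (test name, result dictionary) tuple
    print([test_name for test_name, result in problems])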

We need to override the base class methods here to prevent addition of an item that would render an object invalid.

<Redefine item adding and removing>= (<-U)
<Add to looped data with validity checks>
<Add straight data>

<Add straight data>= (<-U)
def AddCifItem(self,data):
    if isinstance(data[0],(unicode,str)):   # single item
        valid,problems = self.single_item_check(data[0],data[1])
        self.report_if_invalid(valid,problems,data[0])
        valid,problems = self.global_item_check(data[0],data[1])
        self.report_if_invalid(valid,problems,data[0])
    elif isinstance(data[0],tuple) or isinstance(data[0],list):
        paired_data = list(zip(data[0],data[1]))
        for name,value in paired_data:
            valid,problems = self.single_item_check(name,value)
            self.report_if_invalid(valid,problems,name)
        valid,problems = self.loop_item_check(data[0])
        self.report_if_invalid(valid,problems,data[0])
        prov_dict = {}            # for storing temporary items
        for name,value in paired_data: prov_dict[name]=value
        for name,value in paired_data:
            del prov_dict[name]   # remove temporarily
            valid,problems = self.global_item_check(name,value,prov_dict)
            prov_dict[name] = value  # add back in
            self.report_if_invalid(valid,problems,name)
    else:
        raise ValueError("Programming error: AddCifItem passed non-tuple,non-string item")
    super(ValidCifBlock,self).AddCifItem(data)

def AddItem(self,key,value,**kwargs):
    """Set value of dataname `key` to `value` after checking for conformance with CIF dictionary"""
    valid,problems = self.single_item_check(key,value)
    self.report_if_invalid(valid,problems,key)
    valid,problems = self.global_item_check(key,value)
    self.report_if_invalid(valid,problems,key)
    super(ValidCifBlock,self).AddItem(key,value,**kwargs)

# utility function
def report_if_invalid(self,valid,bad_list,data_name):
    if not valid:
        bad_tests = [a[0] for a in bad_list]
        error_string = ",".join(bad_tests)
        error_string = repr(data_name) + " fails following validity checks: "  + error_string
        raise ValidCifError( error_string)

def __delitem__(self,key):
    # we don't need to run single item checks; we do need to run loop and
    # global checks.
    if key in self:
        try:
            loop_items = self.GetLoop(key)
        except TypeError:
            loop_items = []
        if loop_items:             #need to check loop conformance
            loop_names = [a[0] for a in loop_items if a[0] != key]
            valid,problems = self.loop_item_check(loop_names)
            self.report_if_invalid(valid,problems,key)
        valid,problems = self.remove_global_item_check(key)
        self.report_if_invalid(valid,problems,key)
    self.RemoveCifItem(key)

Adding to a loop. We find the loop containing the dataname that we have been passed, and then append all of the (key,values) pairs that we are passed in data, which is a dictionary. We expect that the data have been sorted out for us, unlike when data are passed in AddCifItem, when there can be both unlooped and looped data in one set. The dataname passed to this routine is simply a convenient way to refer to the loop, and has no other significance.

<Add to looped data with validity checks>= (<-U)
def AddToLoop(self,dataname,loopdata):
    # single item checks
    paired_data = list(loopdata.items())
    for name,value in paired_data:
        valid,problems = self.single_item_check(name,value)
        self.report_if_invalid(valid,problems,name)
    # loop item checks; merge with current loop
    for aloop in self.loops.values():
        if dataname in aloop:
            loopnames = list(aloop)
            for new_name in loopdata.keys():
                if new_name not in loopnames: loopnames.append(new_name)
            valid,problems = self.loop_item_check(loopnames)
            self.report_if_invalid(valid,problems,dataname)
    prov_dict = loopdata.copy()
    for name,value in paired_data:
        del prov_dict[name]   # remove temporarily
        valid,problems = self.global_item_check(name,value,prov_dict)
        prov_dict[name] = value  # add back in
        self.report_if_invalid(valid,problems,name)
    CifBlock.AddToLoop(self,dataname,loopdata)

Note that a dictionary must be specified in order to create a ValidCifFile object. This dictionary is then passed to all blocks; if they were already ValidCifBlock objects, they will be reinitialised. As reading a dictionary takes time, we do it immediately rather than deferring it.

As a convenience, we handle lists of filenames/CifFiles which are supposed to be dictionaries, and pass them directly to the ValidCifBlock object which will merge as necessary.

Note that we have to set bigdic before calling __init__. The various calls down through the inheritance hierarchy end up calling ValidCifBlock with self.bigdic as one of the arguments. Also, this __init__ procedure could be called from within StarFile.__init__ if given a filename to read from, so we allow that bigdic might already have been set - and check for its existence before setting it again!

<A valid CIF file>= (<-U)
class ValidCifFile(CifFile):
    """A CIF file for which all datablocks are valid.  Argument `dic` to
    initialisation specifies a `CifDic` object to use for validation."""
    <Initialise valid CIF>
    <Redefine add new block>

<Initialise valid CIF>= (<-U)
def __init__(self,dic=None,diclist=[],mergemode="replace",*args,**kwargs):
    print("WARNING: ValidCifFile will be removed in the next release.")
    if not diclist and not dic and not hasattr(self,'bigdic'):
        raise ValidCifError( "At least one dictionary is required to create a ValidCifFile object")
    if not dic and diclist:     #merge here for speed
        self.bigdic = merge_dic(diclist,mergemode)
    elif dic:                   #a supplied dictionary takes precedence
        self.bigdic = dic
    CifFile.__init__(self,*args,**kwargs)
    for blockname in self.keys():
        self.dictionary[blockname]=ValidCifBlock(data=self.dictionary[blockname],dic=self.bigdic)

Whenever a new block is added, we have to additionally update our match array and perform a validation run. This definition shadows the definition in the parent class.

<Redefine add new block>= (<-U)
def NewBlock(self,blockname,blockcontents,**kwargs):
    CifFile.NewBlock(self,blockname,blockcontents,**kwargs)
    # dictionary[blockname] is now a CifBlock object.  We
    # turn it into a ValidCifBlock object
    self.dictionary[blockname] = ValidCifBlock(dic=self.bigdic,
                                     data=self.dictionary[blockname])

We provide some functions for straight validation. These serve as an example of the use of the CifDic class with the CifFile class.

<Top-level functions>= (<-U)
<ValidationResult class>
<Validate against the given dictionaries>
<Run dictionary validation checks>

A convenient wrapper class for dealing with the structure returned by validation. Perhaps a more elegant approach would be to return one of these objects from validation rather than wrap the validation routines inside.

<ValidationResult class>= (<-U)
class ValidationResult:
    """Represents validation result. It is initialised with """
    def __init__(self,results):
        """results is return value of validate function"""
        self.valid_result, self.no_matches = results

    def report(self,use_html):
        """Return string with human-readable description of validation result"""
        return validate_report((self.valid_result, self.no_matches),use_html)

    def is_valid(self,block_name=None):
        """Return True for a valid CIF file, otherwise False"""
        if block_name is not None:
            block_names = [block_name]
        else:
            block_names = self.valid_result.keys()
        valid = True
        for block_name in block_names:
            if self.valid_result[block_name] != (True,{}):
                valid = False
                break
        return valid

    def has_no_match_items(self,block_name=None):
        """Return True if some items were not found in the dictionary"""
        if block_name is not None:
            block_names = [block_name]
        else:
            block_names = self.no_matches.keys()
        has_no_match_items = False
        for block_name in block_names:
            if self.no_matches[block_name]:
                has_no_match_items = True
                break
        return has_no_match_items



We provide a function to do straight validation, using the built-in methods of the dictionary type. We need to create a single dictionary from the multiple dictionaries we are passed, before doing our check. Also, we allow validation of dictionaries themselves, by passing a special flag isdic. This should only be used for DDL2/DDLm dictionaries, and simply makes save frames visible as ordinary blocks. DDL1 dictionaries validate OK if (any) global block is deleted.

<Validate against the given dictionaries>= (<-U)
def Validate(ciffile,dic = "", diclist=[],mergemode="replace",isdic=False):
    """Validate the `ciffile` conforms to the definitions in `CifDic` object `dic`, or if `dic` is missing,
    to the results of merging the `CifDic` objects in `diclist` according to `mergemode`.  Flag
    `isdic` indicates that `ciffile` is a CIF dictionary meaning that save frames should be
    accessed for validation and that mandatory_category should be interpreted differently for DDL2."""
    if not isinstance(ciffile,CifFile):
        check_file = CifFile(ciffile)
    else:
        check_file = ciffile
    if not dic:
        fulldic = merge_dic(diclist,mergemode)
    else:
        fulldic = dic
    no_matches = {}
    valid_result = {}
    if isdic:          #assume one block only
        check_file.scoping = 'instance' #only data blocks visible
        top_level = list(check_file.keys())[0]
        check_file.scoping = 'dictionary'   #all blocks visible
        # collect a list of parents for speed
        if fulldic.diclang == 'DDL2':
            poss_parents = fulldic.get_all("_item_linked.parent_name")
            for parent in poss_parents:
                curr_parent = listify(check_file.get(parent,[]))
                new_vals = check_file.get_all(parent)
                new_vals.extend(curr_parent)
                if len(new_vals)>0:
                    check_file[parent] = new_vals
                    print("Added %s (len %d)" % (parent,len(check_file[parent])))
    # now run the validations
    for block in check_file.keys():
        if isdic and block == top_level:
           block_scope = 'Dictionary'
        elif isdic:
           block_scope = 'Item'
        else:
           block_scope = 'Datablock'
        no_matches[block] = [a for a in check_file[block].keys() if a not in fulldic]
        # remove non-matching items
        print("Not matched: " + repr(no_matches[block]))
        for nogood in no_matches[block]:
             del check_file[block][nogood]
        print("Validating block %s, scope %s" % (block,block_scope))
        valid_result[block] = run_data_checks(check_file[block],fulldic,block_scope=block_scope)
    return valid_result,no_matches
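
A typical invocation, wrapped in the ValidationResult convenience class defined above (the file names here are hypothetical; this snippet is not part of the tangled source):

# Example (illustrative only)
result = ValidationResult(Validate("mydata.cif", dic=CifDic("cif_core.dic")))
print(result.is_valid())
print(result.report(use_html=False))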

def validate_report(val_result,use_html=False):
    valid_result,no_matches = val_result
    outstr = StringIO()
    if use_html:
        outstr.write("<h2>Validation results</h2>")
    else:
        outstr.write( "Validation results\n")
        outstr.write( "------------------\n")
    if len(valid_result) > 10:
        suppress_valid = True         #don't clutter with valid messages
        if use_html:
           outstr.write("<p>For brevity, valid blocks are not reported in the output.</p>")
    else:
        suppress_valid = False
    for block in valid_result.keys():
        block_result = valid_result[block]
        if block_result[0]:
            out_line = "Block '%s' is VALID" % block
        else:
            out_line = "Block '%s' is INVALID" % block
        if use_html:
            if (block_result[0] and (not suppress_valid or len(no_matches[block])>0)) or not block_result[0]:
                outstr.write( "<h3>%s</h3><p>" % out_line)
        else:
            outstr.write( "\n %s\n" % out_line)
        if len(no_matches[block])!= 0:
            if use_html:
                outstr.write( "<p>The following items were not found in the dictionary")
                outstr.write(" (note that this does not invalidate the data block):</p>")
                outstr.write("<p><table>\n")
                [outstr.write("<tr><td>%s</td></tr>" % it) for it in no_matches[block]]
                outstr.write("</table>\n")
            else:
                outstr.write( "\n The following items were not found in the dictionary:\n")
                outstr.write("Note that this does not invalidate the data block\n")
                for it in no_matches[block]:
                    outstr.write("%s\n" % it)
        # now organise our results by type of error, not data item...
        error_type_dic = {}
        for error_item, error_list in block_result[1].items():
            for func_name,bad_result in error_list:
                bad_result.update({"item_name":error_item})
                try:
                    error_type_dic[func_name].append(bad_result)
                except KeyError:
                    error_type_dic[func_name] = [bad_result]
        # make a table of test name, test message
        info_table = {
        'validate_item_type':
            "The following data items had badly formed values",
        'validate_item_esd':
            "The following data items should not have esds appended",
        'validate_enum_range':
            "The following data items have values outside permitted range",
        'validate_item_enum':
            "The following data items have values outside permitted set",
        'validate_looping':
            "The following data items violate looping constraints",
        'validate_loop_membership':
            "The following looped data names are of different categories to the first looped data name",
        'validate_loop_key':
            "A required dataname for this category is missing from the loop\n containing the dataname",
        'validate_loop_key_ddlm':
            "A loop key is missing for the category containing the dataname",
        'validate_loop_references':
            "A dataname required by the item is missing from the loop",
        'validate_parent':
            "A parent dataname is missing or contains different values",
        'validate_child':
            "A child dataname contains different values to the parent",
        'validate_uniqueness':
            "One or more data items do not take unique values",
        'validate_dependents':
            "A dataname required by the item is missing from the data block",
        'validate_exclusion':
            "Both dataname and exclusive alternates or aliases are present in data block",
        'validate_mandatory_category':
            "A required category is missing from this block",
        'check_mandatory_items':
            "A required data attribute is missing from this block",
        'check_prohibited_items':
            "A prohibited data attribute is present in this block"}

        for test_name,test_results in error_type_dic.items():
           if use_html:
               outstr.write(html_error_report(test_name,info_table[test_name],test_results))
           else:
               outstr.write(error_report(test_name,info_table[test_name],test_results))
               outstr.write("\n\n")
    return outstr.getvalue()

# A function to lay out a single error report.  We are passed
# the name of the error (one of our validation functions), the
# explanation to print out, and a list of dictionaries with the error
# information.  We print no more than 50 characters of each bad value.
def error_report(error_name,error_explanation,error_dics):
   retstring = "\n\n " + error_explanation + ":\n\n"
   headstring = "%-32s" % "Item name"
   bodystring = ""
   if "bad_values" in error_dics[0]:
      headstring += "%-20s" % "Bad value(s)"
   if "bad_items" in error_dics[0]:
      headstring += "%-20s" % "Bad dataname(s)"
   if "child" in error_dics[0]:
      headstring += "%-20s" % "Child"
   if "parent" in error_dics[0]:
      headstring += "%-20s" % "Parent"
   headstring +="\n"
   for error in error_dics:
      bodystring += "\n%-32s" % error["item_name"]
      if "bad_values" in error:
          out_vals = [repr(a)[:50] for a in error["bad_values"]]
          bodystring += "%-20s" % out_vals
      if "bad_items" in error:
          bodystring += "%-20s" % repr(error["bad_items"])
      if "child" in error:
          bodystring += "%-20s" % repr(error["child"])
      if "parent" in error:
          bodystring += "%-20s" % repr(error["parent"])
   return retstring + headstring + bodystring
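
For example, a single badly-formed value would be laid out like this (hand-constructed input, not part of the tangled source):

# Example (illustrative only)
print(error_report('validate_item_type',
                   'The following data items had badly formed values',
                   [{'item_name': '_cell_length_a', 'bad_values': ['abc']}]))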

#  This lays out an HTML error report

def html_error_report(error_name,error_explanation,error_dics,annotate=[]):
   retstring = "<h4>" + error_explanation + ":</h4>"
   retstring = retstring + "<table cellpadding=5><tr>"
   headstring = "<th>Item name</th>"
   bodystring = ""
   if "bad_values" in error_dics[0]:
      headstring += "<th>Bad value(s)</th>"
   if "bad_items" in error_dics[0]:
      headstring += "<th>Bad dataname(s)</th>"
   if "child" in error_dics[0]:
      headstring += "<th>Child</th>"
   if "parent" in error_dics[0]:
      headstring += "<th>Parent</th>"
   headstring +="</tr>\n"
   for error in error_dics:
      bodystring += "<tr><td><tt>%s</tt></td>" % error["item_name"]
      if "bad_values" in error:
          bodystring += "<td>%s</td>" % error["bad_values"]
      if "bad_items" in error:
          bodystring += "<td><tt>%s</tt></td>" % error["bad_items"]
      if "child" in error:
          bodystring += "<td><tt>%s</tt></td>" % error["child"]
      if "parent" in error:
          bodystring += "<td><tt>%s</tt></td>" % error["parent"]
      bodystring += "</tr>\n"
   return retstring + headstring + bodystring + "</table>\n"

This function executes validation checks provided in the CifDic. The validation calls create a dictionary containing the test results for each item name. Each item has a list of (test name,result) tuples. After running the tests, we contract these lists to contain only false results, and then remove all items containing no false results.

<Run dictionary validation checks>= (<-U)
def run_data_checks(check_block,fulldic,block_scope='Item'):
    v_result = {}
    for key in check_block.keys():
        update_value(v_result, fulldic.run_item_validation(key,check_block[key]))
        update_value(v_result, fulldic.run_global_validation(key,check_block[key],check_block))
    for loopnames in check_block.loops.values():
        update_value(v_result, fulldic.run_loop_validation(loopnames))
    update_value(v_result,fulldic.run_block_validation(check_block,block_scope=block_scope))
    # return false and list of baddies if anything didn't match
    all_keys = list(v_result.keys())
    for test_key in all_keys:
        v_result[test_key] = [a for a in v_result[test_key] if a[1]["result"]==False]
        if len(v_result[test_key]) == 0:
            del v_result[test_key]
    # if even one false one is found, this should trigger
    # print("Baddies: {!r}".format(v_result))
    isvalid = len(v_result)==0
    return isvalid,v_result

<Utility functions>= (<-U)
<Extract number and esd>
<Convert value to float, ignore esd>
<Conversions to dictionary types>
<Append update>
<Transpose data>
<Merge dictionaries as CIFs>
<Get topmost parent>

This support function uses re capturing to work out the number's value. The regular expression contains ten capturing groups, which are unpacked in order as: the entire numeric expression; the signed mantissa preceding any esd brackets; the unsigned mantissa; the mantissa when it contains a decimal point; the digits after the decimal point; the mantissa when it has no decimal point; the esd bracket contents; the exponent; a question mark; and a full stop. The last two correspond to the CIF 'unknown' and 'not applicable' special values.

The esd should be returned as an independent number. We count the number of digits after the decimal point, create the esd in terms of this, and then, if necessary, apply the exponent.

<Extract number and esd>= (<-U <-U)
def get_number_with_esd(numstring):
    numb_re = '((-?(([0-9]*[.]([0-9]+))|([0-9]+)[.]?))([(][0-9]+[)])?([eEdD][+-]?[0-9]+)?)|(\\?)|(\\.)'
    our_match = re.match(numb_re,numstring)
    if our_match:
        a,base_num,b,c,dad,dbd,esd,exp,q,dot = our_match.groups()
        # print("Debug: {} -> {!r}".format(numstring, our_match.groups()))
    else:
        return None,None
    if dot or q: return None,None     #a dot or question mark
    if exp:          #has exponent
       exp = exp.replace("d","e")     # mop up old fashioned numbers
       exp = exp.replace("D","e")
       base_num = base_num + exp
    # print("Debug: have %s for base_num from %s" % (base_num,numstring))
    base_num = float(base_num)
    # work out esd, if present.
    if esd:
        esd = float(esd[1:-1])    # no brackets
        if dad:                   # decimal point + digits
            esd = esd * (10 ** (-1* len(dad)))
        if exp:
            esd = esd * (10 ** (float(exp[1:])))
    return base_num,esd
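
A few worked examples of the esd arithmetic (illustrative only, not part of the tangled source):

# Example (not tangled)
print(get_number_with_esd("1.234(5)"))   # -> (1.234, 0.005)
print(get_number_with_esd("4.5(2)e1"))   # -> (45.0, 2.0)
print(get_number_with_esd("?"))          # -> (None, None)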

For dREl operations we require that all numerical types actually appear as numerical types rather than strings. This function takes a datablock and a dictionary and converts all the datablock contents to numerical values according to the dictionary specifications.

Note that as written we are happy to interpret a floating point string as an integer. We are therefore assuming that the value has been validated.

<Conversions to dictionary types>= (<-U)
<Overall conversion>
<Convert a single value>
<Convert a list value>
<Convert a matrix value>
<Parse the structure specification>

Instead of returning a value, we return a function that can be used to convert the values. This saves time reconstructing the conversion function for every value in a loop.

<Overall conversion>= (<-U)
def convert_type(definition):
    """Convert value to have the type given by definition"""
    #extract the actual required type information
    container = definition['_type.container']
    dimension = definition.get('_type.dimension',StarFile.StarList([]))
    structure = interpret_structure(definition['_type.contents'])
    if container == 'Single':   #a single value to convert
        return convert_single_value(structure)
    elif container == 'List':   #lots of the same value
        return convert_list_values(structure,dimension)
    elif container == 'Multiple': #no idea
        return None
    elif container in ('Array','Matrix'): #numpy array
        return convert_matrix_values(structure)
    return lambda a:a    #unable to convert

<Convert a single value>= (<-U)
def convert_single_value(type_spec):
    """Convert a single item according to type_spec"""
    if type_spec == 'Real':
        return float_with_esd
    if type_spec in ('Count','Integer','Index','Binary','Hexadecimal','Octal'):
        return int
    if type_spec == 'Complex':
        return complex
    if type_spec == 'Imag':
        return lambda a:complex(0,float(a))   #value arrives as a string
    if type_spec in ('Code','Name','Tag'):  #case-insensitive -> lowercase
        return lambda a:a.lower()
    return lambda a:a   #can't do anything numeric
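
The returned converter is applied directly to the raw string value, for example (not part of the tangled source):

# Example (not tangled)
to_int = convert_single_value('Count')
print(to_int('42'))                            # -> 42
print(convert_single_value('Code')('Setting')) # -> 'setting'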

Convert a whole DDLm list. A 'List' type implies a repetition of the types given in the 'type.contents' entry. We get all fancy and build a function to decode each entry in our input list. This function is then mapped over the List, and in the case of looped List values, it can be mapped over the dataname value as well. However, in the case of a single repetition, files are allowed to drop one level of enclosing brackets. We account for that here by detecting a one-element list and *not* mapping the conversion function. TODO: Note that we do not yet handle the case that we are supposed to convert to a Matrix, rather than a list. TODO: handle arbitrary dimension lists, rather than special-casing the character sequence '[1]'.

<Convert a list value>= (<-U)
class convert_simple_list(object):

    """\
    Callable object that converts values in a simple list according
    to the specified element structure.
    """

    def __init__(self, structure):
        self.converters = [convert_single_value(tp) for tp in structure]
        return

    def __call__(self, element):
        if len(element) != len(self.converters):
            emsg = "Expected iterable of %i values, got %i." % (
                (len(self.converters), len(element)))
            raise ValueError(emsg)
        rv = [f(e) for f, e in zip(self.converters, element)]
        return rv

# End of class convert_simple_list

def convert_list_values(structure, dimension):
    """Convert the values according to the element
       structure given in [[structure]]"""
    # simple repetition
    if isinstance(structure, (unicode, str)):
        fcnv = convert_single_value(structure)
    # assume structure is a list of types
    else:
        fcnv = convert_simple_list(structure)
    rv = fcnv
    # setup nested conversion function when dimension differs from 1.
    if len(dimension) > 0 and int(dimension[0]) != 1:
        rv = lambda args : [fcnv(a) for a in args]
    return rv
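
A small demonstration of both cases, substituting a plain list for the StarList dimension (illustrative only, not part of the tangled source):

# Example (not tangled)
one_packet = convert_list_values(['Integer','Real'], ['1'])
print(one_packet(['3','4.5']))                  # -> [3, 4.5]
many_packets = convert_list_values(['Integer','Real'], ['2'])
print(many_packets([['3','4.5'],['6','7.5']]))  # -> [[3, 4.5], [6, 7.5]]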

When storing a matrix/array value as a result of a calculation, we remove the numpy information and instead store as a StarList. The following routine will work transparently for either string or number-valued Star Lists, so we do not have to worry.

<Convert a matrix value>= (<-U)
def convert_matrix_values(valtype):
    """Convert a dREL String or Float valued List structure to a numpy matrix structure"""
    # first convert to numpy array, then let numpy do the work
    try:
        import numpy
    except ImportError:
        return lambda a:a   #cannot do it
    if valtype == 'Real':
        dtype = float
    elif valtype == 'Integer':
        dtype = int
    elif valtype == 'Complex':
        dtype = complex
    else:
        raise ValueError('Unknown matrix value type')
    fcnv = lambda a : numpy.asarray(a, dtype=dtype)
    return fcnv
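
For example, when numpy is available (illustrative only, not part of the tangled source):

# Example (not tangled)
to_real = convert_matrix_values('Real')
m = to_real([['1','2'],['3','4']])
print(m.dtype, m.shape)   # -> float64 (2, 2)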

DDLm specifies List element composition using a notation of form 'cont(el,el,el...)' where 'cont' refers to a container constructor (list or matrix so far) and 'el' is a simple element type. If 'cont' is missing, the sequence of elements is a sequence of elements in a simple list. We have written a simple parser to interpret this.

<Parse the structure specification>= (<-U)
def interpret_structure(struc_spec):
    """Interpret a DDLm structure specification"""
    from . import TypeContentsParser as t
    p = t.TypeParser(t.TypeParserScanner(struc_spec))
    return getattr(p,"input")()

<Append update>= (<-U)
# A utility function to append to item values rather than replace them
def update_value(base_dict,new_items):
    for new_key in new_items.keys():
        if new_key in base_dict:
            base_dict[new_key].extend(new_items[new_key])
        else:
            base_dict[new_key] = new_items[new_key]
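
Its behaviour on overlapping keys, with hand-constructed test results (not part of the tangled source):

# Example (not tangled)
base = {'_cell_length_a': [('validate_item_type', {'result': True})]}
update_value(base, {'_cell_length_a': [('validate_enum_range', {'result': False})],
                    '_cell_length_b': [('validate_item_type', {'result': True})]})
# base now holds both test results for _cell_length_a plus the new key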

<Transpose data>= (<-U)
#Transpose the list of lists passed to us
def transpose(base_list):
    new_lofl = []
    full_length = len(base_list)
    opt_range = range(full_length)
    for i in range(len(base_list[0])):
       new_packet = []
       for j in opt_range:
          new_packet.append(base_list[j][i])
       new_lofl.append(new_packet)
    return new_lofl
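
For example (not part of the tangled source):

# Example (not tangled)
print(transpose([[1,2,3],[4,5,6]]))   # -> [[1, 4], [2, 5], [3, 6]]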

# listify strings - used surprisingly often
def listify(item):
    if isinstance(item,(unicode,str)): return [item]
    else: return item

# given a list of search items, return a list of items
# actually contained in the given data block
def filter_present(namelist,datablocknames):
    return [a for a in namelist if a in datablocknames]

# Make an item immutable, used if we want a list to be a key
def make_immutable(values):
    """Turn list of StarList values into a list of immutable items"""
    if not isinstance(values[0],StarList):
        return values
    else:
        return [tuple(a) for a in values]

Decorators. The following decorator keeps track of calls in order to detect recursion, raising a special recursion error so that the derive_item method can act accordingly. We also propagate the first-seen value of 'allow_defaults' recursively, so that the original call controls whether or not default values may be used. Typically a derivation is tried first without, and then with, default values, so that every route to deriving the value is attempted before falling back on defaults.

<Decorators>= (<-U)
def track_recursion(in_this_func):
    """Keep an eye on a function call to make sure that the key argument hasn't been
    seen before"""
    def wrapper(*args,**kwargs):
        key_arg = args[1]
        if key_arg in wrapper.called_list:
            print('Recursion watch: %s already called %d times' % (key_arg,wrapper.called_list.count(key_arg)))
            raise CifRecursionError( key_arg,wrapper.called_list[:])    #failure
        if len(wrapper.called_list) == 0:   #first time
            wrapper.stored_use_defaults = kwargs.get("allow_defaults",False)
            print('All recursive calls will set allow_defaults to ' + repr(wrapper.stored_use_defaults))
        else:
            kwargs["allow_defaults"] = wrapper.stored_use_defaults
        wrapper.called_list.append(key_arg)
        print('Recursion watch: call stack: ' + repr(wrapper.called_list))
        try:
            result = in_this_func(*args,**kwargs)
        except StarFile.StarDerivationError as s:
            if len(wrapper.called_list) == 1: #no more
                raise StarFile.StarDerivationFailure(wrapper.called_list[0])
            else:
                raise
        finally:
            wrapper.called_list.pop()
            if len(wrapper.called_list) == 0:
                wrapper.stored_use_defaults = 'error'
        return result
    wrapper.called_list = []
    return wrapper
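
A minimal sketch of the decorator in use, assuming a hypothetical method whose second positional argument is the dataname being derived, as is the case for derive_item (not part of the tangled source):

# Sketch (not tangled)
class Deriver(object):
    @track_recursion
    def derive_value(self, key, allow_defaults=False):
        # a definition that (incorrectly) depends on itself is caught by
        # the wrapper, which raises CifRecursionError instead of looping
        return self.derive_value(key)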

This uses the CifFile merge method to merge a list of filenames, with an initial check to determine DDL1/DDL2 merge style. In one case we merge save frames in a single block, in another case we merge data blocks. These are different levels.

Note that the data block name is passed to specify the parts of each object to be merged, rather than the objects themselves (not doing this was a bug that was caught a while ago).

<Merge dictionaries as CIFs>= (<-U)
# merge ddl dictionaries.  We should be passed filenames or CifFile
# objects
def merge_dic(diclist,mergemode="replace",ddlspec=None):
    dic_as_cif_list = []
    for dic in diclist:
        if not isinstance(dic,CifFile) and \
           not isinstance(dic,(unicode,str)):
               raise TypeError("Require list of CifFile names/objects for dictionary merging")
        if not isinstance(dic,CifFile): dic_as_cif_list.append(CifFile(dic))
        else: dic_as_cif_list.append(dic)
    # we now merge left to right
    basedic = dic_as_cif_list[0]
    if "on_this_dictionary" in basedic:   #DDL1 style only
        for dic in dic_as_cif_list[1:]:
           basedic.merge(dic,mode=mergemode,match_att=["_name"])
    elif len(basedic.keys()) == 1:                     #One block: DDL2/m style
        base_block = list(basedic.keys())[0]
        for dic in dic_as_cif_list[1:]:
           new_block = list(dic.keys())[0]
           basedic.merge(dic,mode=mergemode,
                         single_block=[base_block,new_block],
                         match_att=["_item.name"],match_function=find_parent)
    return CifDic(basedic)

Find the main item from a parent-child list. We are asked to find the topmost parent in a DDL2 definition block containing multiple _item.name entries. We use the insight that the parent item will be the one item that does not also appear in the list of children. If there are no item names at all we are dealing with something like a category definition, and return None.

<Get topmost parent>= (<-U)
def find_parent(ddl2_def):
    if "_item.name" not in ddl2_def:
       return None
    if isinstance(ddl2_def["_item.name"],(unicode,str)):
        return ddl2_def["_item.name"]
    if "_item_linked.child_name" not in ddl2_def:
        raise CifError("Asked to find parent in block with no child_names")
    if "_item_linked.parent_name" not in ddl2_def:
        raise CifError("Asked to find parent in block with no parent_names")
    result = list([a for a in ddl2_def["_item.name"] if a not in ddl2_def["_item_linked.child_name"]])
    if len(result)>1 or len(result)==0:
        raise CifError("Unable to find single unique parent data item")
    return result[0]
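
For example, given a hypothetical DDL2 definition presented as a plain mapping (not part of the tangled source):

# Example (not tangled)
ddl2_def = {'_item.name': ['_parent.id', '_child.parent_id'],
            '_item_linked.child_name': ['_child.parent_id'],
            '_item_linked.parent_name': ['_parent.id']}
print(find_parent(ddl2_def))   # -> '_parent.id'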

Cif Loop block class

With the removal (by PyCIFRW) of nested loops, this class is now unnecessary. It is now simply a pointer to StarFile.LoopBlock.

<CifLoopBlock class>= (<-U)
class CifLoopBlock(StarFile.LoopBlock):
    def __init__(self,data=(),**kwargs):
        super(CifLoopBlock,self).__init__(data,**kwargs)

<API documentation flags>= (<-U)
#No documentation flags