Modules and Classes
Version: 0.1.3 Last Updated: 03/01/06 20:54:52
Module sqlalchemy.schema

The schema module provides the building blocks for database metadata: all the entities within a SQL database that we might want to view, modify, create, or delete are described by these objects in a database-agnostic way.

A structure of SchemaItems also provides a "visitor" interface, which is the primary method by which other packages operate upon the schema. The SQL package extends this structure with its own clause-specific objects as well as the visitor interface, so that the schema package "plugs in" to the SQL package.
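
The traversal described above can be sketched in miniature. The classes below are hypothetical stand-ins for SchemaItem and SchemaVisitor, not the real ones; they only illustrate how each item accepts a visitor and dispatches to the appropriate visit_xxx method:

```python
# Minimal sketch of the schema visitor pattern (hypothetical stand-in classes).

class Column:
    def __init__(self, name):
        self.name = name

    def accept_schema_visitor(self, visitor):
        visitor.visit_column(self)


class Table:
    def __init__(self, name, *columns):
        self.name = name
        self.columns = list(columns)

    def accept_schema_visitor(self, visitor):
        # traverse child items first, then visit the table itself
        for c in self.columns:
            c.accept_schema_visitor(visitor)
        visitor.visit_table(self)


class NameCollector:
    """A visitor that records the names of the items it visits."""
    def __init__(self):
        self.visited = []

    def visit_column(self, column):
        self.visited.append("column:" + column.name)

    def visit_table(self, table):
        self.visited.append("table:" + table.name)


t = Table("users", Column("id"), Column("name"))
v = NameCollector()
t.accept_schema_visitor(v)
print(v.visited)  # columns first, then the table
```

A schema generator or dropper works the same way: it is just a visitor whose visit_xxx methods emit DDL instead of collecting names.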

Class Column(ColumnClause)

represents a column in a database table. This is a subclass of sql.ColumnClause and represents an actual existing column in the database, in a similar fashion as TableClause/Table.

def __init__(self, name, type, *args, **kwargs)

constructs a new Column object. Arguments are:

name : the name of this column. This should be identical to the name as it appears, or will appear, in the database.

type : the type of this column. This can be any subclass of types.TypeEngine, including the database-agnostic types defined in the types module, database-specific types defined within specific database modules, or user-defined types.

*args : ForeignKey and Sequence objects should be added as list values.

**kwargs : keyword arguments include:

key=None : an optional "alias name" for this column. The column will then be identified everywhere in an application, including the column list on its Table, by this key, and not the given name. Generated SQL, however, will still reference the column by its actual name.

primary_key=False : True if this column is a primary key column. Multiple columns can have this flag set to specify composite primary keys.

nullable=True : True if this column should allow NULLs. Defaults to True unless this column is a primary key column.

default=None : a scalar, Python callable, or ClauseElement representing the "default value" for this column, which will be invoked upon insert if this column is not present in the insert list or is given a value of None.

hidden=False : indicates this column should not be listed in the table's list of columns. Used for the "oid" column, which generally isn't in column lists.

index=None : True or index name. Indicates that this column is indexed. Pass True to autogenerate the index name, or pass a string to specify the index name. Multiple columns that specify the same index name will all be included in the index, in the order of their creation.

unique=None : True or index name. Indicates that this column is indexed in a unique index. Pass True to autogenerate the index name, or pass a string to specify the index name. Multiple columns that specify the same index name will all be included in the index, in the order of their creation.

def accept_schema_visitor(self, visitor)

passes the given visitor to this Column's default and foreign key objects, then calls visit_column on the visitor.

def append_item(self, item)

columns = property()
def copy(self)

creates a copy of this Column, uninitialized.

engine = property()
original = property()
parent = property()
Class ColumnDefault(DefaultGenerator)

A plain default value on a column. this could correspond to a constant, a callable function, or a SQL clause.
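
The scalar-versus-callable distinction above can be sketched as follows. This is a hypothetical helper, not the real ColumnDefault logic, and SQL-clause defaults (which are executed against the database) are omitted:

```python
# Sketch of resolving a column default at insert time: a plain scalar is used
# as-is, while a Python callable is invoked to produce the value.
import datetime

def resolve_default(arg):
    if callable(arg):
        return arg()
    return arg

print(resolve_default(0))                     # a scalar is used directly
print(resolve_default(datetime.date.today))   # a callable is invoked per-insert
```

A callable default is invoked once per insert, which is why something like datetime.date.today (the function itself, not today's value) is the natural argument.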

def __init__(self, arg)

def accept_schema_visitor(self, visitor)

calls the visit_column_default method on the given visitor.

Class ForeignKey(SchemaItem)

defines a ForeignKey constraint between two columns. ForeignKey is specified as an argument to a Column object.

def __init__(self, column)

Constructs a new ForeignKey object. "column" can be a schema.Column object representing the relationship, or just its string name given as "tablename.columnname". A schema can be specified as "schemaname.tablename.columnname".
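
The two string forms above could be split apart as sketched below. This is a hypothetical helper for illustration; the real ForeignKey resolves the string against the engine's table registry rather than just parsing it:

```python
# Split "tablename.columnname" or "schemaname.tablename.columnname" into parts.

def parse_fk_spec(spec):
    parts = spec.split(".")
    if len(parts) == 2:
        table, column = parts
        return None, table, column
    elif len(parts) == 3:
        schema, table, column = parts
        return schema, table, column
    raise ValueError(
        "expected tablename.columnname or schemaname.tablename.columnname")

print(parse_fk_spec("users.id"))         # (None, 'users', 'id')
print(parse_fk_spec("public.users.id"))  # ('public', 'users', 'id')
```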

def accept_schema_visitor(self, visitor)

calls the visit_foreign_key method on the given visitor.

column = property()
def copy(self)

produces a copy of this ForeignKey object.

def references(self, table)

returns True if the given table is referenced by this ForeignKey.

Class Index(SchemaItem)

Represents an index of columns from a database table.

def __init__(self, name, *columns, **kw)

Constructs an index object. Arguments are:

name : the name of the index

*columns : columns to include in the index. All columns must belong to the same table, and no column may appear more than once.

**kw : keyword arguments include:

unique=True : create a unique index

def accept_schema_visitor(self, visitor)

def append_column(self, column)

def create(self)

def drop(self)

engine = property()
def execute(self)

Class PassiveDefault(DefaultGenerator)

a default that takes effect on the database side

def __init__(self, arg)

def accept_schema_visitor(self, visitor)

Class SchemaEngine(object)

a factory object used to create implementations for schema objects. This object is the ultimate base class for the engine.SQLEngine class.

def __init__(self)

def reflecttable(self, table)

given a table, will query the database and populate its Column and ForeignKey objects.

Class SchemaItem(object)

base class for items that define a database schema.

Class SchemaVisitor(ClauseVisitor)

defines the visiting for SchemaItem objects

def visit_column(self, column)

visit a Column.

def visit_column_default(self, default)

visit a ColumnDefault.

def visit_foreign_key(self, join)

visit a ForeignKey.

def visit_index(self, index)

visit an Index.

def visit_passive_default(self, default)

visit a passive default

def visit_schema(self, schema)

visit a generic SchemaItem

def visit_sequence(self, sequence)

visit a Sequence.

def visit_table(self, table)

visit a Table.

Class Sequence(DefaultGenerator)

represents a sequence, which applies to Oracle and Postgres databases.

def __init__(self, name, start=None, increment=None, optional=False)

def accept_schema_visitor(self, visitor)

calls the visit_sequence method on the given visitor.

Class Table(TableClause)

represents a relational database table. This subclasses sql.TableClause to provide a table that is "wired" to an engine. Whereas TableClause represents a table as it is used in a SQL expression, Table represents a table as it is created in the database. Be sure to look at sqlalchemy.sql.TableImpl for additional methods defined on a Table.

def __init__(self, name, engine, **kwargs)

Table objects can be constructed directly; the init method is actually called via the TableSingleton metaclass. Arguments are:

name : the name of this table, exactly as it appears, or will appear, in the database. This property, along with the "schema", indicates the "singleton identity" of this table. Further tables constructed with the same name/schema combination will return the same Table instance.

engine : a SchemaEngine instance to provide services to this table. Usually a subclass of engine.SQLEngine.

*args : should contain a listing of the Column objects for this table.

**kwargs : options include:

schema=None : the "schema name" for this table, which is required if the table resides in a schema other than the default selected schema for the engine's database connection.

autoload=False : the Columns for this table should be reflected from the database. Usually there will be no Column objects in the constructor if this property is set.

redefine=False : if this Table has already been defined in the application, clear out its columns and redefine with new arguments.

mustexist=False : indicates that this Table must already have been defined elsewhere in the application, else an exception is raised.

useexisting=False : indicates that if this Table was already defined elsewhere in the application, disregard the rest of the constructor arguments. If neither this flag nor the "redefine" flag is set, constructing the same table twice will result in an exception.
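
The "singleton identity" behavior can be sketched with a registry keyed by (name, schema). This mimics, in spirit, what the TableSingleton metaclass does; the class and registry below are hypothetical, and the redefine flag is omitted for brevity:

```python
# Sketch of singleton-by-(name, schema) construction with mustexist/useexisting.

_table_registry = {}

class Table:
    def __new__(cls, name, schema=None, mustexist=False, useexisting=False):
        key = (name, schema)
        existing = _table_registry.get(key)
        if existing is not None:
            if not useexisting:
                raise ValueError("table %r already defined" % name)
            return existing
        if mustexist:
            raise KeyError("table %r is not defined" % name)
        obj = super().__new__(cls)
        obj.name, obj.schema = name, schema
        _table_registry[key] = obj
        return obj

t1 = Table("users")
t2 = Table("users", useexisting=True)
assert t1 is t2   # same name/schema yields the same instance
```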

def accept_schema_visitor(self, visitor)

traverses the given visitor across the Column objects inside this Table, then calls the visit_table method on the visitor.

def append_column(self, column)

def append_index(self, index)

def append_index_column(self, column, index=None, unique=None)

Add an index or a column to an existing index of the same name.

def append_item(self, item)

appends a Column item or other schema item to this Table.

def create(self, **params)

def deregister(self)

removes this table from its engine's table registry. This does not issue a SQL DROP statement.

def drop(self, **params)

def reload_values(self, *args)

clears out the columns and other properties of this Table, and reloads them from the given argument list. This is used with the "redefine" keyword argument sent to the metaclass constructor.

def toengine(self, engine, schema=None)

returns a singleton instance of this Table with a different engine

Module sqlalchemy.engine

Defines the SQLEngine class, which serves as the primary "database" object used throughout the sql construction and object-relational mapper packages. A SQLEngine is a facade around a single connection pool corresponding to a particular set of connection parameters, and provides thread-local transactional methods and statement execution methods for Connection objects. It also provides a facade around a Cursor object to allow richer column selection for result rows as well as type conversion operations, known as a ResultProxy.

A SQLEngine is provided to an application as a subclass that is specific to a particular type of DBAPI, and is the central switching point for abstracting different kinds of database behavior into a consistent set of behaviors. It provides a variety of factory methods to produce everything specific to a certain kind of database, including a Compiler and schema creation/dropping objects.

The term "database-specific" will be used to describe any object or function that has behavior corresponding to a particular vendor, such as mysql-specific, sqlite-specific, etc.

Module Functions
def create_engine(name, opts=None, **kwargs)

creates a new SQLEngine instance. There are two forms of calling this method.

In the first, the "name" argument is the type of engine to load, i.e. 'sqlite', 'postgres', 'oracle', 'mysql'. "opts" is a dictionary of options to be sent to the underlying DBAPI module to create a connection, usually including a hostname, username, password, etc.

In the second, the "name" argument is a URL in the form <enginename>://opt1=val1&opt2=val2, where <enginename> is the name as above and the contents of the option dictionary are spelled out as a URL-encoded string. The "opts" argument is not used.

In both cases, **kwargs represents options to be sent to the SQLEngine itself. A possibly partial listing of those options is as follows:

pool=None : an instance of sqlalchemy.pool.DBProxy to be used as the underlying source for connections (DBProxy is described in the previous section). If None, a default DBProxy will be created using the engine's own database module with the given arguments.

echo=False : if True, the SQLEngine will log all statements as well as a repr() of their parameter lists to the engine's logger, which defaults to sys.stdout. A SQLEngine instance's "echo" data member can be modified at any time to turn logging on and off. If set to the string 'debug', result rows will be printed to the standard output as well.

logger=None : a file-like object where logging output can be sent, if echo is set to True. This defaults to sys.stdout.

module=None : used by Oracle and Postgres, this is a reference to a DBAPI2 module to be used instead of the engine's default module. For Postgres, the default is psycopg2, or psycopg1 if 2 cannot be found. For Oracle, it's cx_Oracle. For MySQL, MySQLdb.

use_ansi=True : used only by Oracle; when False, the Oracle driver attempts to support a particular "quirk" of some Oracle databases: that the LEFT OUTER JOIN SQL syntax is not supported, and the "Oracle join" syntax of <column1>(+)=<column2> must be used in order to achieve a LEFT OUTER JOIN. It is advised that the Oracle database be configured with full ANSI support instead of using this feature.
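
The URL form of the "name" argument could be decomposed roughly as sketched below. This is a hypothetical helper illustrating the <enginename>://opt1=val1&opt2=val2 shape described above, not the actual parser:

```python
# Split an engine URL into (engine name, options dictionary).
from urllib.parse import unquote

def parse_engine_url(url):
    name, _, query = url.partition("://")
    opts = {}
    if query:
        for pair in query.split("&"):
            key, _, value = pair.partition("=")
            opts[unquote(key)] = unquote(value)
    return name, opts

print(parse_engine_url("postgres://host=localhost&user=scott"))
# ('postgres', {'host': 'localhost', 'user': 'scott'})
```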

def engine_descriptors()

provides a listing of all the database implementations supported. This data is provided as a list of dictionaries, where each dictionary contains the following key/value pairs:

name : the name of the engine, suitable for use in the create_engine function

description: a plain description of the engine.

arguments : a dictionary describing the name and description of each parameter used to connect to this engine's underlying DBAPI.

This function is meant for usage in automated configuration tools that wish to query the user for database and connection information.

Class SQLEngine(SchemaEngine)

The central "database" object used by an application. Subclasses of this object are used by the schema and SQL construction packages to provide database-specific behaviors, as well as an execution and thread-local transaction context. SQLEngines are constructed via the create_engine() function inside this package.

The "ischema" property returns an ISchema object for this engine, which allows access to information_schema tables (if supported).

def __init__(self, pool=None, echo=False, logger=None, default_ordering=False, echo_pool=False, echo_uow=False, convert_unicode=False, **params)

constructs a new SQLEngine. SQLEngines should be constructed via the create_engine() function which will construct the appropriate subclass of SQLEngine.

def begin(self)

"begins" a transaction on a pooled connection, and stores the connection in a thread-local context. repeated calls to begin() within the same thread will increment a counter that must be decreased by corresponding commit() statements before an actual commit occurs. this is to provide "nested" behavior of transactions so that different functions can all call begin()/commit() and still call each other.

def commit(self)

commits the current thread-local transaction started by begin(). If begin() was called multiple times, a counter will be decreased for each call to commit(), with the actual commit operation occurring when the counter reaches zero. This provides "nested" behavior of transactions, so that different functions can each call begin()/commit() and still call each other.
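
The counter behavior described by begin() and commit() can be sketched in a few lines. This is a minimal illustration of the nesting rule, not the real SQLEngine code (which also manages the pooled connection itself):

```python
# Nested begin()/commit(): only the outermost commit() triggers a real commit.

class TransactionContext:
    def __init__(self):
        self.depth = 0
        self.committed = 0  # counts actual commits, for illustration

    def begin(self):
        self.depth += 1

    def commit(self):
        if self.depth == 0:
            raise RuntimeError("commit() without begin()")
        self.depth -= 1
        if self.depth == 0:
            self._do_commit()

    def _do_commit(self):
        self.committed += 1


ctx = TransactionContext()
ctx.begin()     # outer function
ctx.begin()     # inner function called by the outer one
ctx.commit()    # inner commit: just decrements the counter
ctx.commit()    # outer commit: performs the actual commit
assert ctx.committed == 1
```

This is why two cooperating functions can each bracket their work with begin()/commit() without committing each other's half-finished work.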

def compile(self, statement, parameters, **kwargs)

given a sql.ClauseElement statement plus optional bind parameters, creates a new instance of this engine's SQLCompiler, compiles the ClauseElement, and returns the newly compiled object.

def compiler(self, statement, parameters)

returns a sql.ClauseVisitor which will produce a string representation of the given ClauseElement and parameter dictionary. This object is usually a subclass of ansisql.ANSICompiler. compiler is called within the context of the compile() method.

def connect_args(self)

subclasses override this method to provide a two-item tuple containing the *args and **kwargs used to establish a connection.

def connection(self)

returns a managed DBAPI connection from this SQLEngine's connection pool.

def create(self, entity, **params)

creates a table or index within this engine's database connection given a schema.Table object.

def dbapi(self)

subclasses override this method to provide the DBAPI module used to establish connections.

def defaultrunner(self, proxy)

Returns a schema.SchemaVisitor instance that can execute the default values on a column. The base class for this visitor is the DefaultRunner class inside this module. This visitor will typically only receive schema.DefaultGenerator schema objects. The given proxy is a callable that takes a string statement and a dictionary of bind parameters to be executed. For engines that require positional arguments, the dictionary should be an instance of OrderedDict which returns its bind parameters in the proper order. defaultrunner is called within the context of the execute_compiled() method.

def dispose(self)

disposes of the underlying pool manager for this SQLEngine.

def do_begin(self, connection)

implementations might want to put logic here for turning autocommit on/off, etc.

def do_commit(self, connection)

implementations might want to put logic here for turning autocommit on/off, etc.

def do_rollback(self, connection)

implementations might want to put logic here for turning autocommit on/off, etc.

def drop(self, entity, **params)

drops a table or index within this engine's database connection given a schema.Table object.

def execute(self, statement, parameters, connection=None, cursor=None, echo=None, typemap=None, commit=False, return_raw=False, **kwargs)

executes the given string-based SQL statement with the given parameters.

The parameters can be a dictionary or a list, or a list of dictionaries or lists, depending on the paramstyle of the DBAPI. If the current thread has specified a transaction begin() for this engine, the statement will be executed in the context of the current transactional connection. Otherwise, a commit() will be performed immediately after execution, since the local pooled connection is returned to the pool after execution without a transaction set up.

In all error cases, a rollback() is immediately performed on the connection before propagating the exception outwards.

Other options include:

connection - a DBAPI connection to use for the execute. If None, a connection is pulled from this engine's connection pool.

echo - enables echo for this execution, which causes all SQL and parameters to be dumped to the engine's logging output before execution.

typemap - a map of column names mapped to sqlalchemy.types.TypeEngine objects. These will be passed to the created ResultProxy to perform post-processing on result-set values.

commit - if True, will automatically commit the statement after completion.

def execute_compiled(self, compiled, parameters, connection=None, cursor=None, echo=None, **kwargs)

executes the given compiled statement object with the given parameters.

The parameters can be a dictionary of key/value pairs, or a list of dictionaries for an executemany() style of execution. Engines that use positional parameters will convert the parameters to a list before execution.

If the current thread has specified a transaction begin() for this engine, the statement will be executed in the context of the current transactional connection. Otherwise, a commit() will be performed immediately after execution, since the local pooled connection is returned to the pool after execution without a transaction set up.

In all error cases, a rollback() is immediately performed on the connection before propagating the exception outwards.

Other options include:

connection - a DBAPI connection to use for the execute. If None, a connection is pulled from this engine's connection pool.

echo - enables echo for this execution, which causes all SQL and parameters to be dumped to the engine's logging output before execution.

typemap - a map of column names mapped to sqlalchemy.types.TypeEngine objects. These will be passed to the created ResultProxy to perform post-processing on result-set values.

commit - if True, will automatically commit the statement after completion.

def get_default_schema_name(self)

returns the currently selected schema in the current connection.

def hash_key(self)

ischema = property()
def last_inserted_ids(self)

returns a thread-local list of the primary key values for the last insert statement executed. This does not apply to straight textual clauses; only to sql.Insert objects compiled against a schema.Table object, which are executed via statement.execute(). The order of items in the list is the same as that of the Table's 'primary_key' attribute. In some cases, this method may invoke a query back to the database to retrieve the data, based on the "lastrowid" value in the cursor.

def lastrow_has_defaults(self)

def log(self, msg)

logs a message using this SQLEngine's logger stream.

def multi_transaction(self, tables, func)

provides a transaction boundary across tables which may be in multiple databases. Given a list of tables and a function that operates upon them, a begin()/commit() pair will be invoked for each distinct engine represented within those tables, and the function executed within the context of that transaction; any exceptions will result in a rollback(). Clearly, this approach only goes so far: if database A commits, and then database B commits and fails, A is already committed. Any failure conditions have to be raised before anyone commits for this to be useful.
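
The boundary described above can be sketched like this. The engine objects here are fakes that merely record their calls; the helper is an illustration of the control flow, not the real method:

```python
# begin() on each distinct engine, run the function, commit all; roll back on error.

class FakeEngine:
    def __init__(self):
        self.log = []
    def begin(self):    self.log.append("begin")
    def commit(self):   self.log.append("commit")
    def rollback(self): self.log.append("rollback")

def multi_transaction(engines, func):
    engines = list(dict.fromkeys(engines))  # distinct engines, order preserved
    for e in engines:
        e.begin()
    try:
        func()
    except Exception:
        for e in engines:
            e.rollback()
        raise
    for e in engines:
        e.commit()

a, b = FakeEngine(), FakeEngine()
multi_transaction([a, b, a], lambda: None)
assert a.log == ["begin", "commit"] and b.log == ["begin", "commit"]
```

Note that the commits happen one engine at a time, which is exactly the caveat above: this is not a two-phase commit.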

name = property()
def oid_column_name(self)

returns the oid column name for this engine, or None if the engine can't/won't support OID/ROWID.

paramstyle = property()
def post_exec(self, proxy, compiled, parameters, **kwargs)

called by execute_compiled after the compiled statement is executed.

def pre_exec(self, proxy, compiled, parameters, **kwargs)

called by execute_compiled before the compiled statement is executed.

def reflecttable(self, table)

given a Table object, reflects its columns and properties from the database.

def rollback(self)

rolls back the current thread-local transaction started by begin(). the "begin" counter is cleared and the transaction ended.

def schemadropper(self, **params)

returns a schema.SchemaVisitor instance that can drop schemas, when it is invoked to traverse a set of schema objects. schemadropper is called via the drop() method.

def schemagenerator(self, **params)

returns a schema.SchemaVisitor instance that can generate schemas, when it is invoked to traverse a set of schema objects. schemagenerator is called via the create() method.

def supports_sane_rowcount(self)

Provided to indicate when MySQL is being used, which does not have standard behavior for the "rowcount" function on a statement handle.

def text(self, text, *args, **kwargs)

returns a sql.text() object for performing literal queries.

def transaction(self, func)

executes the given function within a transaction boundary. This is a shortcut for explicitly calling begin() and commit(), and optionally rollback() when exceptions are raised.

def type_descriptor(self, typeobj)

provides a database-specific TypeEngine object, given the generic object which comes from the types module. Subclasses will usually use the adapt_type() method in the types module to make this job easy.

Class ResultProxy

wraps a DBAPI cursor object to provide access to row columns based on integer position, case-insensitive column name, or by schema.Column object, e.g.:

row = fetchone()

col1 = row[0]                  # access via integer position
col2 = row['col2']             # access via name
col3 = row[mytable.c.mycol]    # access via Column object

ResultProxy also contains a map of TypeEngine objects and will invoke the appropriate convert_result_value() method before returning columns.
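
The three access styles can be illustrated with a toy row object. This is not the real RowProxy; the Col class is a hypothetical stand-in for a schema.Column, assumed only to carry a .name attribute:

```python
# Toy row supporting integer, case-insensitive string, and column-object keys.

class ToyRow:
    def __init__(self, names, values):
        self._values = list(values)
        self._index = {n.lower(): i for i, n in enumerate(names)}

    def __getitem__(self, key):
        if isinstance(key, int):
            return self._values[key]
        if isinstance(key, str):
            return self._values[self._index[key.lower()]]
        # otherwise assume a Column-like object with a .name attribute
        return self._values[self._index[key.name.lower()]]

class Col:
    def __init__(self, name):
        self.name = name

row = ToyRow(["id", "Email"], [7, "ed@example.com"])
assert row[0] == 7
assert row["email"] == "ed@example.com"      # case-insensitive name lookup
assert row[Col("Email")] == "ed@example.com"
```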

def __init__(self, cursor, engine, typemap=None)

ResultProxy objects are constructed via the execute() method on SQLEngine.

def fetchall(self)

fetches all rows, just like DBAPI cursor.fetchall().

def fetchone(self)

fetches one row, just like DBAPI cursor.fetchone().

Class RowProxy

proxies a single cursor row for a parent ResultProxy.

def __init__(self, parent, row)

RowProxy objects are constructed by ResultProxy objects.

def items(self)

def keys(self)

def values(self)

Module sqlalchemy.sql

defines the base components of SQL expression trees.

Module Functions
def alias(*args, **params)

def and_(*clauses)

joins a list of clauses together by the AND operator. the & operator can be used as well.

def asc(column)

returns an ascending ORDER BY clause element, e.g.: order_by = [asc(table1.mycol)]

def bindparam(key, value=None, type=None)

creates a bind parameter clause with the given key. An optional default value can be specified by the value parameter, and the optional type parameter is a sqlalchemy.types.TypeEngine object which indicates bind-parameter and result-set translation for this bind parameter.

def column(text, table=None, type=None)

returns a textual column clause, relative to a table. this is also the primitive version of a schema.Column which is a subclass.

def delete(table, whereclause=None, **kwargs)

returns a DELETE clause element. This can also be called from a table directly via the table's delete() method. 'table' is the table from which to delete. 'whereclause' is a ClauseElement describing the WHERE condition of the DELETE statement.

def desc(column)

returns a descending ORDER BY clause element, e.g.: order_by = [desc(table1.mycol)]

def exists(*args, **params)

def insert(table, values=None, **kwargs)

returns an INSERT clause element. This can also be called from a table directly via the table's insert() method. 'table' is the table to be inserted into. 'values' is a dictionary which specifies the column specifications of the INSERT, and is optional. If left as None, the column specifications are determined from the bind parameters used during the compile phase of the INSERT statement. If the bind parameters also are None during the compile phase, then the column specifications will be generated from the full list of table columns.

If both 'values' and compile-time bind parameters are present, the compile-time bind parameters override the information specified within 'values' on a per-key basis.

The keys within 'values' can be either Column objects or their string identifiers. Each key may reference one of: a literal data value (i.e. string, number, etc.), a Column object, or a SELECT statement. If a SELECT statement is specified which references this INSERT statement's table, the statement will be correlated against the INSERT statement.
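
The per-key override rule above amounts to a dictionary merge, sketched below. The helper name is hypothetical; it only illustrates which value wins when both sources supply a key:

```python
# Compile-time bind parameters override the 'values' dictionary per-key.

def effective_insert_values(values, compile_params):
    merged = dict(values or {})
    merged.update(compile_params or {})
    return merged

result = effective_insert_values(
    {"name": "ed", "status": "new"},   # 'values' given to insert()
    {"status": "active"},              # bind parameters at compile time
)
assert result == {"name": "ed", "status": "active"}
```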

def join(left, right, onclause=None, **kwargs)

returns a JOIN clause element (regular inner join), given the left and right hand expressions, as well as the ON condition's expression. To chain joins together, use the resulting Join object's "join()" or "outerjoin()" methods.

def literal(value, type=None)

returns a literal clause, bound to a bind parameter. literal clauses are created automatically when used as the right-hand side of a boolean or math operation against a column object. use this function when a literal is needed on the left-hand side (and optionally on the right as well). the optional type parameter is a sqlalchemy.types.TypeEngine object which indicates bind-parameter and result-set translation for this literal.

def not_(clause)

returns a negation of the given clause, i.e. NOT(clause). the ~ operator can be used as well.

def or_(*clauses)

joins a list of clauses together by the OR operator. the | operator can be used as well.

def outerjoin(left, right, onclause=None, **kwargs)

returns an OUTER JOIN clause element, given the left and right hand expressions, as well as the ON condition's expression. To chain joins together, use the resulting Join object's "join()" or "outerjoin()" methods.

def select(columns=None, whereclause=None, from_obj=[], **kwargs)

returns a SELECT clause element. This can also be called via the table's select() method. Arguments are:

'columns' : a list of columns and/or selectable items to select columns from

'whereclause' : a text or ClauseElement expression which will form the WHERE clause

'from_obj' : a list of additional "FROM" objects, such as Join objects, which will extend or override the default "from" objects created from the column list and the whereclause

**kwargs : additional parameters for the Select object

def subquery(alias, *args, **params)

def table(name, *columns)

returns a table clause. this is a primitive version of the schema.Table object, which is a subclass of this object.

def text(text, engine=None, *args, **kwargs)

creates literal text to be inserted into a query. When constructing a query from a select(), update(), insert() or delete(), using plain strings for argument values will usually result in text objects being created automatically. Use this function when creating textual clauses outside of other ClauseElement objects, or optionally wherever plain text is to be used. Arguments include:

text - the text of the SQL statement to be created. use :<param> to specify bind parameters; they will be compiled to their engine-specific format.

engine - an optional engine to be used for this text query. Alternatively, call the text() method off the engine directly.

bindparams - a list of bindparam() instances which can be used to define the types and/or initial values for the bind parameters within the textual statement; the keynames of the bindparams must match those within the text of the statement. The types will be used for pre-processing on bind values.

typemap - a dictionary mapping the names of columns represented in the SELECT clause of the textual statement to type objects, which will be used to perform post-processing on columns within the result set (for textual statements that produce result sets).
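
The ":<param>" compilation mentioned above can be sketched as a rewrite into one DBAPI paramstyle. The target here is "pyformat" ("%(name)s"), one of several engine-specific formats; a real compiler must also avoid rewriting inside quoted literals, which this sketch does not:

```python
# Rewrite ":name" bind parameters into pyformat "%(name)s" placeholders.
import re

def to_pyformat(statement):
    return re.sub(r":(\w+)", r"%(\1)s", statement)

sql = to_pyformat("select * from users where name = :name and status = :status")
print(sql)
# select * from users where name = %(name)s and status = %(status)s
```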

def union(*selects, **params)

def union_all(*selects, **params)

def update(table, whereclause=None, values=None, **kwargs)

returns an UPDATE clause element. This can also be called from a table directly via the table's update() method. 'table' is the table to be updated. 'whereclause' is a ClauseElement describing the WHERE condition of the UPDATE statement. 'values' is a dictionary which specifies the SET conditions of the UPDATE, and is optional. If left as None, the SET conditions are determined from the bind parameters used during the compile phase of the UPDATE statement. If the bind parameters also are None during the compile phase, then the SET conditions will be generated from the full list of table columns.

If both 'values' and compile-time bind parameters are present, the compile-time bind parameters override the information specified within 'values' on a per-key basis.

The keys within 'values' can be either Column objects or their string identifiers. Each key may reference one of: a literal data value (i.e. string, number, etc.), a Column object, or a SELECT statement. If a SELECT statement is specified which references this UPDATE statement's table, the statement will be correlated against the UPDATE statement.

Class Compiled(ClauseVisitor)

represents a compiled SQL expression. the __str__ method of the Compiled object should produce the actual text of the statement. Compiled objects are specific to the database library that created them, and also may or may not be specific to the columns referenced within a particular set of bind parameters. In no case should the Compiled object be dependent on the actual values of those bind parameters, even though it may reference those values as defaults.

def __init__(self, engine, statement, parameters)

constructs a new Compiled object. Arguments are:

engine : SQLEngine to compile against

statement : ClauseElement to be compiled

parameters : optional dictionary indicating a set of bind parameters specified with this Compiled object. These parameters are the "default" values corresponding to the ClauseElement's BindParamClauses when the Compiled is executed. In the case of an INSERT or UPDATE statement, these parameters will also result in the creation of new BindParamClause objects for each key, and will also affect the generated column list in an INSERT statement and the SET clauses of an UPDATE statement. The keys of the parameter dictionary can either be the string names of columns or actual sqlalchemy.schema.Column objects.

def execute(self, *multiparams, **params)

executes this compiled object using the underlying SQLEngine

def get_params(self, **params)

returns the bind params for this compiled object. Will start with the default parameters specified when this Compiled object was first constructed, and will override those values with those sent via **params, which are key/value pairs. Each key should match one of the BindParamClause objects compiled into this object; either the "key" or "shortname" property of the BindParamClause.

def scalar(self, *multiparams, **params)

executes this compiled object via the execute() method, then returns the first column of the first row. Useful for executing functions, sequences, rowcounts, etc.
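The default-parameter behavior of Compiled, in particular how get_params() layers per-execution values over the defaults captured at construction time, can be sketched with a stand-in class. All names here are invented for illustration; this is not the real Compiled class.

```python
# Minimal sketch of the Compiled object's parameter handling described
# above: defaults are captured at construction, str() produces the
# statement text, and get_params() overrides defaults per key.

class CompiledSketch:
    def __init__(self, statement, parameters=None):
        self.statement = statement           # the SQL text this object represents
        self.parameters = parameters or {}   # "default" bind parameter values

    def __str__(self):
        # the str() of a Compiled produces the actual statement text
        return self.statement

    def get_params(self, **params):
        # start with the defaults, then override with the values sent here
        merged = dict(self.parameters)
        merged.update(params)
        return merged
```

Note that the statement text never depends on the parameter *values*, matching the rule above that a Compiled must not be dependent on the actual values of its bind parameters.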

back to section top
Class ClauseElement(object)

base class for elements of a programmatically constructed SQL expression.

attempts to locate a SQLEngine within this ClauseElement structure, or returns None if none found.

def accept_visitor(self, visitor)

accepts a ClauseVisitor and calls the appropriate visit_xxx method.
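The visitor dispatch described above, which the schema and sql packages both rely on, can be sketched with stand-in classes. The class names below are invented, and real visitors define many more visit_xxx methods.

```python
# Sketch of the accept_visitor() pattern: each element calls the
# visit_xxx method matching its own type, and container elements
# forward the visitor to their children.

class ColumnSketch:
    def __init__(self, name):
        self.name = name

    def accept_visitor(self, visitor):
        visitor.visit_column(self)

class TableSketch:
    def __init__(self, name, columns):
        self.name = name
        self.columns = columns

    def accept_visitor(self, visitor):
        # visit the table itself, then each of its columns
        visitor.visit_table(self)
        for c in self.columns:
            c.accept_visitor(visitor)

class NameCollector:
    """A ClauseVisitor-style object that records what it visits."""
    def __init__(self):
        self.seen = []

    def visit_table(self, table):
        self.seen.append('table:' + table.name)

    def visit_column(self, column):
        self.seen.append('column:' + column.name)
```

This is how the schema package "plugs in" to the SQL package: new element types only need an accept_visitor() method, and new operations only need a visitor with the corresponding visit_xxx methods.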

def compare(self, other)

compares this ClauseElement to the given ClauseElement. Subclasses should override the default behavior, which is a straight identity comparison.

def compile(self, engine=None, parameters=None, typemap=None)

compiles this SQL expression using its underlying SQLEngine to produce a Compiled object. If no engine can be found, an ansisql engine is used. 'parameters' is a dictionary representing the default bind parameters to be used with the statement.

def copy_container(self)

should return a copy of this ClauseElement if this ClauseElement contains other ClauseElements; otherwise, it should return self. This is used to create copies of expression trees that still reference the same "leaf nodes". The new structure can then be restructured without affecting the original.

engine = property()
def execute(self, *multiparams, **params)

compiles and executes this SQL expression using its underlying SQLEngine. the given **params are used as bind parameters when compiling and executing the expression. the DBAPI cursor object is returned.

def is_selectable(self)

returns True if this ClauseElement is Selectable, i.e. it contains a list of Column objects and can be used as the target of a select statement.

def scalar(self, *multiparams, **params)

executes this SQL expression via the execute() method, then returns the first column of the first row. Useful for executing functions, sequences, rowcounts, etc.

back to section top
Class TableClause(FromClause)

def __init__(self, name, *columns)

def accept_visitor(self, visitor)

def alias(self, name=None)

def append_column(self, c)

c = property()
columns = property()
def count(self, whereclause=None, **params)

def delete(self, whereclause=None)

foreign_keys = property()
indexes = property()
def insert(self, values=None)

def join(self, right, *args, **kwargs)

oid_column = property()
original_columns = property()
def outerjoin(self, right, *args, **kwargs)

primary_key = property()
def select(self, whereclause=None, **params)

def update(self, whereclause=None, values=None)

back to section top
Class ColumnClause(ColumnElement)

represents a textual column clause in a SQL statement. May or may not be bound to an underlying Selectable.

def __init__(self, text, selectable=None, type=None)

def accept_visitor(self, visitor)

back to section top
Module sqlalchemy.pool

provides a connection pool implementation, which optionally manages connections on a thread local basis. Also provides a DBAPI2 transparency layer so that pools can be managed automatically, based on module type and connect arguments, simply by calling regular DBAPI connect() methods.

Module Functions
def clear_managers()

removes all current DBAPI2 managers. all pools and connections are disposed.

def manage(module, **params)

given a DBAPI2 module and pool management parameters, returns a proxy for the module that will automatically pool connections. Options are delivered to an underlying DBProxy object.

Arguments: module : a DBAPI2 database module. Options: echo=False : if set to True, connections being checked out of and returned to the pool will be logged to the standard output, as well as pool sizing information.

use_threadlocal=True : if set to True, repeated calls to connect() within the same application thread will be guaranteed to return the same connection object, if one has already been retrieved from the pool and has not been returned yet. This allows code to retrieve a connection from the pool, and then while still holding on to that connection, to call other functions which also ask the pool for a connection of the same arguments; those functions will act upon the same connection that the calling method is using.

poolclass=QueuePool : the default class used by the pool module to provide pooling. QueuePool uses the Python Queue.Queue class to maintain a list of available connections.

pool_size=5 : used by QueuePool - the size of the pool to be maintained. This is the largest number of connections that will be kept persistently in the pool. Note that the pool begins with no connections; once this number of connections is requested, that number of connections will remain.

max_overflow=10 : the maximum overflow size of the pool. When the number of checked-out connections reaches the size set in pool_size, additional connections will be returned up to this limit. When those additional connections are returned to the pool, they are disconnected and discarded. It follows then that the total number of simultaneous connections the pool will allow is pool_size + max_overflow, and the total number of "sleeping" connections the pool will allow is pool_size. max_overflow can be set to -1 to indicate no overflow limit; no limit will be placed on the total number of concurrent connections.
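The pool_size / max_overflow behavior described above can be sketched with a stdlib Queue. This is a simplified, non-thread-exhaustive illustration of the semantics, not SQLAlchemy's QueuePool; all names are invented.

```python
import queue

# Sketch of the pool_size / max_overflow semantics: up to pool_size
# connections are kept persistently; up to max_overflow extra connections
# may be created, and those extras are discarded when returned.

class QueuePoolSketch:
    def __init__(self, creator, pool_size=5, max_overflow=10):
        self.creator = creator
        self.pool = queue.Queue(pool_size)          # the "sleeping" connections
        self.max_total = pool_size + max_overflow   # ceiling on simultaneous connections
        self.checked_out = 0

    def get(self):
        try:
            conn = self.pool.get_nowait()
        except queue.Empty:
            if self.checked_out >= self.max_total:
                raise RuntimeError("pool exhausted")
            conn = self.creator()
        self.checked_out += 1
        return conn

    def return_conn(self, conn):
        self.checked_out -= 1
        try:
            self.pool.put_nowait(conn)   # keep it for reuse...
        except queue.Full:
            conn.close()                 # ...unless it was an overflow connection
```

With pool_size=1 and max_overflow=1, two connections can be checked out at once; a third get() fails, and only one of the two returned connections is retained.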

back to section top
Class DBProxy(object)

proxies a DBAPI2 connect() call to a pooled connection keyed to the specific connect parameters.

def __init__(self, module, poolclass=QueuePool, **params)

module is a DBAPI2 module. poolclass is a Pool class, defaulting to QueuePool. other parameters are sent to the Pool object's constructor.

def close(self)

def connect(self, *args, **params)

connects to a database using this DBProxy's module and the given connect arguments. if the arguments match an existing pool, the connection will be returned from the pool's current thread-local connection instance, or if there is no thread-local connection instance it will be checked out from the set of pooled connections. If the pool has no available connections and allows new connections to be created, a new database connection will be made.

def dispose(self, *args, **params)

disposes the connection pool referenced by the given connect arguments.

def get_pool(self, *args, **params)

back to section top
Class Pool(object)

def __init__(self, echo=False, use_threadlocal=True)

def connect(self)

def do_get(self)

def do_return_conn(self, conn)

def do_return_invalid(self)

def get(self)

def log(self, msg)

def return_conn(self, conn)

def return_invalid(self)

def status(self)

back to section top
Class QueuePool(Pool)

uses Queue.Queue to maintain a fixed-size list of connections.

def __init__(self, creator, pool_size=5, max_overflow=10, **params)

def checkedin(self)

def checkedout(self)

def do_get(self)

def do_return_conn(self, conn)

def do_return_invalid(self)

def overflow(self)

def size(self)

def status(self)

back to section top
Class SingletonThreadPool(Pool)

Maintains one connection per thread, never moving a connection to another thread. this is used for SQLite and other databases with a similar restriction.
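The one-connection-per-thread pattern described above can be sketched with a threading.local, shown here with SQLite since that is this pool's main user. This is an illustrative stand-in, not the real SingletonThreadPool; the class name is invented.

```python
import sqlite3
import threading

# Sketch of the thread-confined pool: each thread gets its own connection,
# created on first use and never handed to any other thread.

class ThreadLocalPoolSketch:
    def __init__(self, creator):
        self.creator = creator
        self._local = threading.local()   # per-thread storage

    def connect(self):
        # create this thread's connection lazily, then always return it
        if not hasattr(self._local, 'conn'):
            self._local.conn = self.creator()
        return self._local.conn
```

Repeated calls from one thread return the same connection object, while a second thread receives a distinct connection.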

def __init__(self, creator, **params)

def do_get(self)

def do_return_conn(self, conn)

def do_return_invalid(self)

def status(self)

back to section top
Module sqlalchemy.mapping

the mapper package provides object-relational functionality, building upon the schema and sql packages and tying operations to class properties and constructors.

Module Functions
def assign_mapper(class_, *args, **params)

def cascade_mappers(*classes_or_mappers)

given a list of classes and/or mappers, identifies the foreign key relationships between the given mappers or corresponding class mappers, and creates relation() objects representing those relationships, including a backreference. Attempts to find the "secondary" table in a many-to-many relationship as well. The names of the relations will be a lowercase version of the related class name. In the case of one-to-many or many-to-many, the name will be "pluralized", which currently is based on the English language (i.e. an 's' or 'es' is appended).
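An English 's'/'es' pluralization rule of the kind described above might look like the following. This is a hypothetical helper; the exact rule used by cascade_mappers is not specified beyond what the paragraph above says.

```python
# Sketch of a minimal English pluralizer: names ending in a sibilant
# take 'es'; everything else takes 's'.  Illustrative only.

def pluralize(name):
    if name.endswith(('s', 'x', 'z', 'ch', 'sh')):
        return name + 'es'
    return name + 's'
```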

def class_mapper(class_)

given a class, returns the primary Mapper associated with the class.

def clear_mappers()

removes all mappers that have been created thus far. when new mappers are created, they will be assigned to their classes as their primary mapper.

def defer(name, **kwargs)

returns a MapperOption that will convert the column property of the given name into a deferred load. Used with mapper.options()

def deferred(*columns, **kwargs)

def eagerload(name, **kwargs)

returns a MapperOption that will convert the property of the given name into an eager load. Used with mapper.options()

def extension(ext)

returns a MapperOption that will add the given MapperExtension to the mapper returned by mapper.options().

def lazyload(name, **kwargs)

returns a MapperOption that will convert the property of the given name into a lazy load. Used with mapper.options()

def mapper(class_, table=None, *args, **params)

returns a new or already cached Mapper object.

def noload(name, **kwargs)

returns a MapperOption that will convert the property of the given name into a non-load. Used with mapper.options()

def object_mapper(object)

given an object, returns the primary Mapper associated with the object or the object's class.

def relation(*args, **kwargs)

provides a relationship of a primary Mapper to a secondary Mapper, which corresponds to a parent-child or associative table relationship.

def undefer(name, **kwargs)

returns a MapperOption that will convert the column property of the given name into a non-deferred (regular column) load. Used with mapper.options.

back to section top
Class MapperExtension(object)

def __init__(self)

def after_insert(self, mapper, instance)

called after an object instance has been INSERTed

def after_update(self, mapper, instance)

called after an object instance is UPDATEd

def append_result(self, mapper, row, imap, result, instance, isnew, populate_existing=False)

called when an object instance is being appended to a result list. If it returns True, it is assumed that this method handled the appending itself.

mapper - the mapper doing the operation row - the result row from the database imap - a dictionary that is storing the running set of objects collected from the current result set result - an instance of util.HistoryArraySet(), which may be an attribute on an object if this is a related object load (lazy or eager). use result.append_nohistory(value) to append objects to this list. instance - the object instance to be appended to the result isnew - indicates if this is the first time we have seen this object instance in the current result set. if you are selecting from a join, such as an eager load, you might see the same object instance many times in the same result set. populate_existing - usually False, indicates if object instances that were already in the main identity map, i.e. were loaded by a previous select(), get their attributes overwritten

def before_delete(self, mapper, instance)

called before an object instance is DELETEd

def before_insert(self, mapper, instance)

called before an object instance is INSERTed into its table. this is a good place to set up primary key values and similar values that aren't handled otherwise.

def before_update(self, mapper, instance)

called before an object instance is UPDATEd

def create_instance(self, mapper, row, imap, class_)

called when a new object instance is about to be created from a row. the method can choose to create the instance itself, or it can return None to indicate normal object creation should take place. mapper - the mapper doing the operation row - the result row from the database imap - a dictionary that is storing the running set of objects collected from the current result set class_ - the class we are mapping.

back to section top
Module sqlalchemy.mapping.objectstore

maintains all currently loaded objects in memory, using the "identity map" pattern. Also provides a "unit of work" object which tracks changes to objects so that they may be properly persisted within a transactional scope.
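The "identity map" pattern named above, in which each primary key identity maps to exactly one in-memory instance, can be sketched in a few lines. This is an illustrative stand-in, not the objectstore implementation; the names are invented.

```python
# Sketch of the identity map: instances are keyed by (class, primary key),
# so repeated loads of the same row return the same object.

class IdentityMapSketch:
    def __init__(self):
        self._map = {}

    def get_or_add(self, class_, ident, factory):
        key = (class_, ident)
        if key not in self._map:
            # first sighting of this identity: create and remember it
            self._map[key] = factory()
        return self._map[key]
```

The unit of work builds on this: because every identity resolves to one object, tracking changes to that object is sufficient to know what must be persisted.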

Module Functions
def begin()

begins a new UnitOfWork transaction. the next commit will affect only objects that are created, modified, or deleted following the begin statement.

def clear()

removes all current UnitOfWorks and IdentityMaps for this thread and establishes a new one. It is probably a good idea to discard all current mapped object instances, as they are no longer in the Identity Map.

def commit(*obj)

commits the current UnitOfWork transaction. if a transaction was begun via begin(), commits only those objects that were created, modified, or deleted since that begin statement. otherwise commits all objects that have been changed. if individual objects are submitted, then only those objects are committed, and the begin/commit cycle is not affected.

def delete(*obj)

registers the given objects as to be deleted upon the next commit

def get_id_key(ident, class_)

def get_row_key(row, class_, primary_key)

def has_instance(instance)

returns True if the current thread-local IdentityMap contains the given instance

def has_key(key)

returns True if the current thread-local IdentityMap contains the given instance key

def import_instance(instance)

def instance_key(instance)

returns the IdentityMap key for the given instance

def is_dirty(obj)

returns True if the given object is in the current UnitOfWork's new or dirty list, or if it is a modified list attribute on an object.

back to section top
Class Session(object)

Maintains a UnitOfWork instance, including transaction state.

def __init__(self, nest_transactions=False, hash_key=None)

Initialize the objectstore with a UnitOfWork registry. If called with no arguments, creates a single UnitOfWork for all operations. nest_transactions - indicates whether begin/commit statements can be executed in a "nested" fashion; defaults to False, which indicates "only commit on the outermost begin/commit". hash_key - the hash_key used to identify objects against this session, which defaults to the id of the Session instance.

def begin(self)

begins a new UnitOfWork transaction and returns a transaction-holding object. commit() or rollback() should be called on the returned object. commit() on the Session will do nothing while a transaction is pending, and further calls to begin() will return no-op transactional objects.

def clear(self)

def commit(self, *objects)

commits the current UnitOfWork transaction. called with no arguments, this is only used for "implicit" transactions when there was no begin(). if individual objects are submitted, then only those objects are committed, and the begin/commit cycle is not affected.

def delete(self, *obj)

registers the given objects as to be deleted upon the next commit

def import_instance(self, instance)

places the given instance in the current thread's unit of work context, either in the current IdentityMap or marked as "new". Returns either the object or the current corresponding version in the Identity Map.

this method should be used for any object instance that is coming from a serialized storage, from another thread (assuming the regular threaded unit of work model), or any case where the instance was loaded/created corresponding to a different base unitofwork than the current one.

def refresh(self, *obj)

def register_clean(self, obj)

def register_new(self, obj)

back to section top
Class SessionTrans(object)

returned by Session.begin(), denotes a transactionalized UnitOfWork instance. call commit() on this to commit the transaction.

True if this SessionTrans is the 'active' transaction marker; otherwise it is a no-op.

returns the parent Session of this SessionTrans object.

returns the parent UnitOfWork corresponding to this transaction.

def __init__(self, parent, uow, isactive)

def begin(self)

calls begin() on the underlying Session object, returning a new no-op SessionTrans object.

def commit(self)

commits the transaction noted by this SessionTrans object.

isactive = property()
parent = property()
def rollback(self)

rolls back the current UnitOfWork transaction, in the case that begin() has been called. The changes logged since the begin() call are discarded.

uow = property()
back to section top
Class UnitOfWork(object)

def __init__(self, identity_map=None)

def commit(self, *objects)

def get(self, class_, *id)

given a class and a list of primary key values in their table-order, locates the mapper for this class and calls get with the given primary key values.

def has_key(self, key)

returns True if the given key is present in this UnitOfWork's identity map.

def is_dirty(self, obj)

def refresh(self, obj)

def register_attribute(self, class_, key, uselist, **kwargs)

def register_callable(self, obj, key, func, uselist, **kwargs)

def register_clean(self, obj)

def register_deleted(self, obj)

def register_dirty(self, obj)

def register_new(self, obj)

def rollback_object(self, obj)

'rolls back' the attributes that have been changed on an object instance.

def unregister_deleted(self, obj)

def update(self, obj)

called to add an object to this UnitOfWork as though it were loaded from the DB, but is actually coming from somewhere else, like a web session or similar.

back to section top
Module sqlalchemy.exceptions

Class ArgumentError

raised for all those conditions where invalid arguments are sent to constructed objects. This error generally corresponds to construction time state errors.

back to section top
Class AssertionError

raised when internal state is detected to be in an invalid condition

back to section top
Class CommitError

raised when an invalid condition is detected upon a commit()

back to section top
Class DBAPIError

something weird happened with a particular DBAPI version

back to section top
Class InvalidRequestError

sqlalchemy was asked to do something it can't do, such as return nonexistent data. This error generally corresponds to runtime state errors.

back to section top
Class SQLAlchemyError

generic error class

back to section top
Class SQLError

raised when the execution of a SQL statement fails. includes accessors for the underlying exception, as well as the SQL and bind parameters

def __init__(self, statement, params, orig)
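The wrapping shape implied by the signature above, which retains the failed SQL, its bind parameters, and the original DBAPI exception, can be sketched as follows. This is an illustrative stand-in mirroring the documented __init__ signature, not the real class.

```python
# Sketch of a SQLError-style wrapper: the statement, parameters, and the
# underlying DBAPI exception all remain accessible on the raised error.

class SQLErrorSketch(Exception):
    def __init__(self, statement, params, orig):
        self.statement = statement   # the SQL text that failed
        self.params = params         # the bind parameters in effect
        self.orig = orig             # the underlying DBAPI exception
        super().__init__("(%s) %s" % (type(orig).__name__, statement))
```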

back to section top
Module sqlalchemy.ext.proxy

Module Functions
def create_engine(name, opts=None, **kwargs)

creates a new SQLEngine instance. There are two forms of calling this method. In the first, the "name" argument is the type of engine to load, i.e. 'sqlite', 'postgres', 'oracle', 'mysql'. "opts" is a dictionary of options to be sent to the underlying DBAPI module to create a connection, usually including a hostname, username, password, etc. In the second, the "name" argument is a URL in the form <enginename>://opt1=val1&opt2=val2, where <enginename> is the name as above and the contents of the option dictionary are spelled out as a URL-encoded string; the "opts" argument is not used. In both cases, **kwargs represents options to be sent to the SQLEngine itself. A possibly partial listing of those options is as follows:

pool=None : an instance of sqlalchemy.pool.DBProxy to be used as the underlying source for connections (DBProxy is described in the previous section). If None, a default DBProxy will be created using the engine's own database module with the given arguments.

echo=False : if True, the SQLEngine will log all statements as well as a repr() of their parameter lists to the engine's logger, which defaults to sys.stdout. A SQLEngine instance's "echo" data member can be modified at any time to turn logging on and off. If set to the string 'debug', result rows will be printed to the standard output as well.

logger=None : a file-like object where logging output can be sent, if echo is set to True. This defaults to sys.stdout.

module=None : used by Oracle and Postgres, this is a reference to a DBAPI2 module to be used instead of the engine's default module. For Postgres, the default is psycopg2, or psycopg1 if 2 cannot be found. For Oracle, its cx_Oracle. For mysql, MySQLdb.

use_ansi=True : used only by Oracle; when False, the Oracle driver attempts to support a quirk of some Oracle databases in which the LEFT OUTER JOIN SQL syntax is not supported, and the "Oracle join" syntax of <column1>(+)=<column2> must be used in order to achieve a LEFT OUTER JOIN. It's advised that the Oracle database be configured to have full ANSI support instead of relying on this feature.
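The URL calling form described above decomposes into the same (name, opts) pair as the first form. A parse of this 0.1-era URL syntax can be sketched with the stdlib; this is a hypothetical helper, not SQLAlchemy's parser.

```python
from urllib.parse import parse_qsl

# Sketch of decomposing "<enginename>://opt1=val1&opt2=val2" into the
# (name, opts) arguments of the dictionary calling form.

def parse_engine_url(url):
    name, _, optstring = url.partition('://')
    opts = dict(parse_qsl(optstring))   # URL-encoded key/value pairs
    return name, opts
```

For example, 'postgres://database=test&host=localhost' decomposes into the name 'postgres' and an opts dictionary with 'database' and 'host' keys.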

back to section top
Class AutoConnectEngine(BaseProxyEngine)

An SQLEngine proxy that automatically connects when necessary.

def __init__(self, dburi, opts=None, **kwargs)

def get_engine(self)

back to section top
Class BaseProxyEngine(SchemaEngine)

Basis for all proxy engines

engine = property()
def get_engine(self)

def hash_key(self)

def oid_column_name(self)

def reflecttable(self, table)

def set_engine(self, engine)

def type_descriptor(self, typeobj)

Proxy point: return a ProxyTypeEngine

back to section top
Class ProxyEngine(BaseProxyEngine)

SQLEngine proxy. Supports lazy and late initialization by delegating to a real engine (set with connect()), and using proxy classes for TypeEngine.

def __init__(self, **kwargs)

def connect(self, uri, opts=None, **kwargs)

Establish connection to a real engine.

def get_engine(self)

def set_engine(self, engine)

back to section top
Class ProxyType(object)

ProxyType base class; used by ProxyTypeEngine to construct proxying types

def __init__(self, engine, typeobj)

back to section top
Class ProxyTypeEngine(object)

Proxy type engine; creates a dynamic proxy type subclass that is an instance of the actual type, but proxies engine-dependent operations through the proxy engine.

back to section top
Class TypeEngine(object)

def adapt(self, typeobj)

given a class that is a subclass of this TypeEngine's class, produces a new instance of that class with an equivalent state to this TypeEngine. The given class is a database-specific subclass which is obtained via a lookup dictionary, mapped against the class returned by the class_to_adapt() method.

def adapt_args(self)

Returns an instance of this TypeEngine instance's class, adapted according to the constructor arguments of this TypeEngine. Default return value is just this object instance.

def class_to_adapt(self)

returns the class that should be sent to the adapt() method. This class will be used to look up an appropriate database-specific subclass.
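The lookup-and-adapt mechanism described by adapt() and class_to_adapt() can be sketched with stand-in types. All class and variable names below are invented for illustration; this is not the real TypeEngine hierarchy.

```python
# Sketch of TypeEngine adaptation: a generic type's class_to_adapt()
# result keys into a database module's lookup dictionary, and adapt()
# rebuilds the instance as the database-specific subclass with
# equivalent state.

class StringSketch:
    def __init__(self, length=None):
        self.length = length

    def class_to_adapt(self):
        # the class used as the lookup key
        return type(self)

    def adapt(self, typeobj):
        # produce an instance of the db-specific subclass, same state
        return typeobj(length=self.length)

class PGStringSketch(StringSketch):
    def get_col_spec(self):
        return "VARCHAR(%d)" % self.length

# a database module's lookup dictionary: generic class -> specific subclass
pg_colspecs = {StringSketch: PGStringSketch}

def adapt_type(typeobj, colspecs):
    return typeobj.adapt(colspecs[typeobj.class_to_adapt()])
```

A generic string of length 30 adapted through this table yields a Postgres-flavored instance that can emit its column specification.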

def convert_bind_param(self, value, engine)

def convert_result_value(self, value, engine)

def get_col_spec(self)

back to section top