Coverage for /home/martinb/.local/share/virtualenvs/camcops/lib/python3.6/site-packages/sqlalchemy/dialects/mysql/base.py : 29%

Hot-keys on this page
r m x p toggle line displays
j k next/prev highlighted chunk
0 (zero) top of page
1 (one) first highlighted chunk
1# mysql/base.py
2# Copyright (C) 2005-2020 the SQLAlchemy authors and contributors
3# <see AUTHORS file>
4#
5# This module is part of SQLAlchemy and is released under
6# the MIT License: http://www.opensource.org/licenses/mit-license.php
8r"""
10.. dialect:: mysql
11 :name: MySQL
13Supported Versions and Features
14-------------------------------
16SQLAlchemy supports MySQL starting with version 4.1 through modern releases.
17However, no heroic measures are taken to work around major missing
18SQL features - if your server version does not support sub-selects, for
19example, they won't work in SQLAlchemy either.
21See the official MySQL documentation for detailed information about features
22supported in any given server release.
24.. _mysql_connection_timeouts:
26Connection Timeouts and Disconnects
27-----------------------------------
29MySQL features an automatic connection close behavior, for connections that
30have been idle for a fixed period of time, defaulting to eight hours.
31To circumvent having this issue, use
32the :paramref:`_sa.create_engine.pool_recycle` option which ensures that
33a connection will be discarded and replaced with a new one if it has been
34present in the pool for a fixed number of seconds::
36 engine = create_engine('mysql+mysqldb://...', pool_recycle=3600)
38For more comprehensive disconnect detection of pooled connections, including
39accommodation of server restarts and network issues, a pre-ping approach may
40be employed. See :ref:`pool_disconnects` for current approaches.
42.. seealso::
44 :ref:`pool_disconnects` - Background on several techniques for dealing
45 with timed out connections as well as database restarts.
47.. _mysql_storage_engines:
49CREATE TABLE arguments including Storage Engines
50------------------------------------------------
52MySQL's CREATE TABLE syntax includes a wide array of special options,
53including ``ENGINE``, ``CHARSET``, ``MAX_ROWS``, ``ROW_FORMAT``,
54``INSERT_METHOD``, and many more.
55To accommodate the rendering of these arguments, specify the form
56``mysql_argument_name="value"``. For example, to specify a table with
57``ENGINE`` of ``InnoDB``, ``CHARSET`` of ``utf8mb4``, and ``KEY_BLOCK_SIZE``
58of ``1024``::
60 Table('mytable', metadata,
61 Column('data', String(32)),
62 mysql_engine='InnoDB',
63 mysql_charset='utf8mb4',
64 mysql_key_block_size="1024"
65 )
67The MySQL dialect will normally transfer any keyword specified as
68``mysql_keyword_name`` to be rendered as ``KEYWORD_NAME`` in the
69``CREATE TABLE`` statement. A handful of these names will render with a space
70instead of an underscore; to support this, the MySQL dialect has awareness of
71these particular names, which include ``DATA DIRECTORY``
72(e.g. ``mysql_data_directory``), ``CHARACTER SET`` (e.g.
73``mysql_character_set``) and ``INDEX DIRECTORY`` (e.g.
74``mysql_index_directory``).
76The most common argument is ``mysql_engine``, which refers to the storage
77engine for the table. Historically, MySQL server installations would default
78to ``MyISAM`` for this value, although newer versions may be defaulting
79to ``InnoDB``. The ``InnoDB`` engine is typically preferred for its support
80of transactions and foreign keys.
82A :class:`_schema.Table`
83that is created in a MySQL database with a storage engine
84of ``MyISAM`` will be essentially non-transactional, meaning any
85INSERT/UPDATE/DELETE statement referring to this table will be invoked as
86autocommit. It also will have no support for foreign key constraints; while
87the ``CREATE TABLE`` statement accepts foreign key options, when using the
88``MyISAM`` storage engine these arguments are discarded. Reflecting such a
89table will also produce no foreign key constraint information.
91For fully atomic transactions as well as support for foreign key
92constraints, all participating ``CREATE TABLE`` statements must specify a
93transactional engine, which in the vast majority of cases is ``InnoDB``.
95.. seealso::
97 `The InnoDB Storage Engine
98 <http://dev.mysql.com/doc/refman/5.0/en/innodb-storage-engine.html>`_ -
99 on the MySQL website.
101Case Sensitivity and Table Reflection
102-------------------------------------
104MySQL has inconsistent support for case-sensitive identifier
105names, basing support on specific details of the underlying
106operating system. However, it has been observed that no matter
107what case sensitivity behavior is present, the names of tables in
108foreign key declarations are *always* received from the database
109as all-lower case, making it impossible to accurately reflect a
110schema where inter-related tables use mixed-case identifier names.
112Therefore it is strongly advised that table names be declared as
113all lower case both within SQLAlchemy as well as on the MySQL
114database itself, especially if database reflection features are
115to be used.
117.. _mysql_isolation_level:
119Transaction Isolation Level
120---------------------------
122All MySQL dialects support setting of transaction isolation level both via a
123dialect-specific parameter :paramref:`_sa.create_engine.isolation_level`
124accepted
125by :func:`_sa.create_engine`, as well as the
126:paramref:`.Connection.execution_options.isolation_level` argument as passed to
127:meth:`_engine.Connection.execution_options`.
128This feature works by issuing the
129command ``SET SESSION TRANSACTION ISOLATION LEVEL <level>`` for each new
130connection. For the special AUTOCOMMIT isolation level, DBAPI-specific
131techniques are used.
133To set isolation level using :func:`_sa.create_engine`::
135 engine = create_engine(
136 "mysql://scott:tiger@localhost/test",
137 isolation_level="READ UNCOMMITTED"
138 )
140To set using per-connection execution options::
142 connection = engine.connect()
143 connection = connection.execution_options(
144 isolation_level="READ COMMITTED"
145 )
147Valid values for ``isolation_level`` include:
149* ``READ COMMITTED``
150* ``READ UNCOMMITTED``
151* ``REPEATABLE READ``
152* ``SERIALIZABLE``
153* ``AUTOCOMMIT``
155The special ``AUTOCOMMIT`` value makes use of the various "autocommit"
156attributes provided by specific DBAPIs, and is currently supported by
157MySQLdb, MySQL-Client, MySQL-Connector Python, and PyMySQL. Using it,
158the MySQL connection will return true for the value of
159``SELECT @@autocommit;``.
161.. versionadded:: 1.1 - added support for the AUTOCOMMIT isolation level.
163AUTO_INCREMENT Behavior
164-----------------------
166When creating tables, SQLAlchemy will automatically set ``AUTO_INCREMENT`` on
167the first :class:`.Integer` primary key column which is not marked as a
168foreign key::
170 >>> t = Table('mytable', metadata,
171 ... Column('mytable_id', Integer, primary_key=True)
172 ... )
173 >>> t.create()
174 CREATE TABLE mytable (
175 id INTEGER NOT NULL AUTO_INCREMENT,
176 PRIMARY KEY (id)
177 )
179You can disable this behavior by passing ``False`` to the
180:paramref:`_schema.Column.autoincrement` argument of :class:`_schema.Column`.
181This flag
182can also be used to enable auto-increment on a secondary column in a
183multi-column key for some storage engines::
185 Table('mytable', metadata,
186 Column('gid', Integer, primary_key=True, autoincrement=False),
187 Column('id', Integer, primary_key=True)
188 )
190.. _mysql_ss_cursors:
192Server Side Cursors
193-------------------
195Server-side cursor support is available for the MySQLdb and PyMySQL dialects.
196From a MySQL point of view this means that the ``MySQLdb.cursors.SSCursor`` or
197``pymysql.cursors.SSCursor`` class is used when building up the cursor which
198will receive results. The most typical way of invoking this feature is via the
199:paramref:`.Connection.execution_options.stream_results` connection execution
200option. Server side cursors can also be enabled for all SELECT statements
201unconditionally by passing ``server_side_cursors=True`` to
202:func:`_sa.create_engine`.
204.. versionadded:: 1.1.4 - added server-side cursor support.
206.. _mysql_unicode:
208Unicode
209-------
211Charset Selection
212~~~~~~~~~~~~~~~~~
214Most MySQL DBAPIs offer the option to set the client character set for
215a connection. This is typically delivered using the ``charset`` parameter
216in the URL, such as::
218 e = create_engine(
219 "mysql+pymysql://scott:tiger@localhost/test?charset=utf8mb4")
221This charset is the **client character set** for the connection. Some
222MySQL DBAPIs will default this to a value such as ``latin1``, and some
223will make use of the ``default-character-set`` setting in the ``my.cnf``
224file as well. Documentation for the DBAPI in use should be consulted
225for specific behavior.
227The encoding used for Unicode has traditionally been ``'utf8'``. However,
228for MySQL versions 5.5.3 on forward, a new MySQL-specific encoding
229``'utf8mb4'`` has been introduced, and as of MySQL 8.0 a warning is emitted
230by the server if plain ``utf8`` is specified within any server-side
231directives, replaced with ``utf8mb3``. The rationale for this new encoding
232is due to the fact that MySQL's legacy utf-8 encoding only supports
233codepoints up to three bytes instead of four. Therefore,
234when communicating with a MySQL database
235that includes codepoints more than three bytes in size,
236this new charset is preferred, if supported by both the database as well
237as the client DBAPI, as in::
239 e = create_engine(
240 "mysql+pymysql://scott:tiger@localhost/test?charset=utf8mb4")
242All modern DBAPIs should support the ``utf8mb4`` charset.
244In order to use ``utf8mb4`` encoding for a schema that was created with legacy
245``utf8``, changes to the MySQL schema and/or server configuration may be
246required.
248.. seealso::
250 `The utf8mb4 Character Set \
251 <http://dev.mysql.com/doc/refman/5.5/en/charset-unicode-utf8mb4.html>`_ - \
252 in the MySQL documentation
254.. _mysql_binary_introducer:
256Dealing with Binary Data Warnings and Unicode
257~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
259MySQL versions 5.6, 5.7 and later (not MariaDB at the time of this writing) now
260emit a warning when attempting to pass binary data to the database, while a
261character set encoding is also in place, when the binary data itself is not
262valid for that encoding::
264 default.py:509: Warning: (1300, "Invalid utf8mb4 character string:
265 'F9876A'")
266 cursor.execute(statement, parameters)
268This warning is due to the fact that the MySQL client library is attempting to
269interpret the binary string as a unicode object even if a datatype such
270as :class:`.LargeBinary` is in use. To resolve this, the SQL statement requires
271a binary "character set introducer" be present before any non-NULL value
272that renders like this::
274 INSERT INTO table (data) VALUES (_binary %s)
276These character set introducers are provided by the DBAPI driver, assuming the
277use of mysqlclient or PyMySQL (both of which are recommended). Add the query
278string parameter ``binary_prefix=true`` to the URL to repair this warning::
280 # mysqlclient
281 engine = create_engine(
282 "mysql+mysqldb://scott:tiger@localhost/test?charset=utf8mb4&binary_prefix=true")
284 # PyMySQL
285 engine = create_engine(
286 "mysql+pymysql://scott:tiger@localhost/test?charset=utf8mb4&binary_prefix=true")
289The ``binary_prefix`` flag may or may not be supported by other MySQL drivers.
291SQLAlchemy itself cannot render this ``_binary`` prefix reliably, as it does
292not work with the NULL value, which is valid to be sent as a bound parameter.
293As the MySQL driver renders parameters directly into the SQL string, it's the
294most efficient place for this additional keyword to be passed.
296.. seealso::
298 `Character set introducers <https://dev.mysql.com/doc/refman/5.7/en/charset-introducer.html>`_ - on the MySQL website
301ANSI Quoting Style
302------------------
304MySQL features two varieties of identifier "quoting style", one using
305backticks and the other using quotes, e.g. ```some_identifier``` vs.
306``"some_identifier"``. All MySQL dialects detect which version
307is in use by checking the value of ``sql_mode`` when a connection is first
308established with a particular :class:`_engine.Engine`.
309This quoting style comes
310into play when rendering table and column names as well as when reflecting
311existing database structures. The detection is entirely automatic and
312no special configuration is needed to use either quoting style.
314MySQL SQL Extensions
315--------------------
317Many of the MySQL SQL extensions are handled through SQLAlchemy's generic
318function and operator support::
320 table.select(table.c.password==func.md5('plaintext'))
321 table.select(table.c.username.op('regexp')('^[a-d]'))
323And of course any valid MySQL statement can be executed as a string as well.
325Some limited direct support for MySQL extensions to SQL is currently
326available.
328* INSERT..ON DUPLICATE KEY UPDATE: See
329 :ref:`mysql_insert_on_duplicate_key_update`
331* SELECT pragma, use :meth:`_expression.Select.prefix_with` and
332 :meth:`_query.Query.prefix_with`::
334 select(...).prefix_with(['HIGH_PRIORITY', 'SQL_SMALL_RESULT'])
336* UPDATE with LIMIT::
338 update(..., mysql_limit=10)
340* optimizer hints, use :meth:`_expression.Select.prefix_with` and
341 :meth:`_query.Query.prefix_with`::
343 select(...).prefix_with("/*+ NO_RANGE_OPTIMIZATION(t4 PRIMARY) */")
345* index hints, use :meth:`_expression.Select.with_hint` and
346 :meth:`_query.Query.with_hint`::
348 select(...).with_hint(some_table, "USE INDEX xyz")
350.. _mysql_insert_on_duplicate_key_update:
352INSERT...ON DUPLICATE KEY UPDATE (Upsert)
353------------------------------------------
355MySQL allows "upserts" (update or insert)
356of rows into a table via the ``ON DUPLICATE KEY UPDATE`` clause of the
357``INSERT`` statement. A candidate row will only be inserted if that row does
358not match an existing primary or unique key in the table; otherwise, an UPDATE
359will be performed. The statement allows for separate specification of the
360values to INSERT versus the values for UPDATE.
362SQLAlchemy provides ``ON DUPLICATE KEY UPDATE`` support via the MySQL-specific
363:func:`.mysql.insert()` function, which provides
364the generative method :meth:`~.mysql.Insert.on_duplicate_key_update`::
366 from sqlalchemy.dialects.mysql import insert
368 insert_stmt = insert(my_table).values(
369 id='some_existing_id',
370 data='inserted value')
372 on_duplicate_key_stmt = insert_stmt.on_duplicate_key_update(
373 data=insert_stmt.inserted.data,
374 status='U'
375 )
377 conn.execute(on_duplicate_key_stmt)
379Unlike PostgreSQL's "ON CONFLICT" phrase, the "ON DUPLICATE KEY UPDATE"
380phrase will always match on any primary key or unique key, and will always
381perform an UPDATE if there's a match; there are no options for it to raise
382an error or to skip performing an UPDATE.
384``ON DUPLICATE KEY UPDATE`` is used to perform an update of the already
385existing row, using any combination of new values as well as values
386from the proposed insertion. These values are normally specified using
387keyword arguments passed to the
388:meth:`~.mysql.Insert.on_duplicate_key_update`
389given column key values (usually the name of the column, unless it
390specifies :paramref:`_schema.Column.key`
391) as keys and literal or SQL expressions
392as values::
394 on_duplicate_key_stmt = insert_stmt.on_duplicate_key_update(
395 data="some data",
396 updated_at=func.current_timestamp(),
397 )
399In a manner similar to that of :meth:`.UpdateBase.values`, other parameter
400forms are accepted, including a single dictionary::
402 on_duplicate_key_stmt = insert_stmt.on_duplicate_key_update(
403 {"data": "some data", "updated_at": func.current_timestamp()},
404 )
406as well as a list of 2-tuples, which will automatically provide
407a parameter-ordered UPDATE statement in a manner similar to that described
408at :ref:`updates_order_parameters`. Unlike the :class:`_expression.Update`
409object,
410no special flag is needed to specify the intent since the argument form is
411this context is unambiguous::
413 on_duplicate_key_stmt = insert_stmt.on_duplicate_key_update(
414 [
415 ("data", "some data"),
416 ("updated_at", func.current_timestamp()),
417 ],
418 )
420.. versionchanged:: 1.3 support for parameter-ordered UPDATE clause within
421 MySQL ON DUPLICATE KEY UPDATE
423.. warning::
425 The :meth:`_expression.Insert.on_duplicate_key_update`
426 method does **not** take into
427 account Python-side default UPDATE values or generation functions, e.g.
428 e.g. those specified using :paramref:`_schema.Column.onupdate`.
429 These values will not be exercised for an ON DUPLICATE KEY style of UPDATE,
430 unless they are manually specified explicitly in the parameters.
434In order to refer to the proposed insertion row, the special alias
435:attr:`~.mysql.Insert.inserted` is available as an attribute on
436the :class:`.mysql.Insert` object; this object is a
437:class:`_expression.ColumnCollection` which contains all columns of the target
438table::
440 from sqlalchemy.dialects.mysql import insert
442 stmt = insert(my_table).values(
443 id='some_id',
444 data='inserted value',
445 author='jlh')
446 do_update_stmt = stmt.on_duplicate_key_update(
447 data="updated value",
448 author=stmt.inserted.author
449 )
450 conn.execute(do_update_stmt)
452When rendered, the "inserted" namespace will produce the expression
453``VALUES(<columnname>)``.
455.. versionadded:: 1.2 Added support for MySQL ON DUPLICATE KEY UPDATE clause
459rowcount Support
460----------------
462SQLAlchemy standardizes the DBAPI ``cursor.rowcount`` attribute to be the
463usual definition of "number of rows matched by an UPDATE or DELETE" statement.
464This is in contradiction to the default setting on most MySQL DBAPI drivers,
465which is "number of rows actually modified/deleted". For this reason, the
466SQLAlchemy MySQL dialects always add the ``constants.CLIENT.FOUND_ROWS``
467flag, or whatever is equivalent for the target dialect, upon connection.
468This setting is currently hardcoded.
470.. seealso::
472 :attr:`_engine.ResultProxy.rowcount`
475CAST Support
476------------
478MySQL documents the CAST operator as available in version 4.0.2. When using
479the SQLAlchemy :func:`.cast` function, SQLAlchemy
480will not render the CAST token on MySQL before this version, based on server
481version detection, instead rendering the internal expression directly.
483CAST may still not be desirable on an early MySQL version post-4.0.2, as it
484didn't add all datatype support until 4.1.1. If your application falls into
485this narrow area, the behavior of CAST can be controlled using the
486:ref:`sqlalchemy.ext.compiler_toplevel` system, as per the recipe below::
488 from sqlalchemy.sql.expression import Cast
489 from sqlalchemy.ext.compiler import compiles
491 @compiles(Cast, 'mysql')
492 def _check_mysql_version(element, compiler, **kw):
493 if compiler.dialect.server_version_info < (4, 1, 0):
494 return compiler.process(element.clause, **kw)
495 else:
496 return compiler.visit_cast(element, **kw)
498The above function, which only needs to be declared once
499within an application, overrides the compilation of the
500:func:`.cast` construct to check for version 4.1.0 before
501fully rendering CAST; else the internal element of the
502construct is rendered directly.
505.. _mysql_indexes:
507MySQL Specific Index Options
508----------------------------
510MySQL-specific extensions to the :class:`.Index` construct are available.
512Index Length
513~~~~~~~~~~~~~
515MySQL provides an option to create index entries with a certain length, where
516"length" refers to the number of characters or bytes in each value which will
517become part of the index. SQLAlchemy provides this feature via the
518``mysql_length`` parameter::
520 Index('my_index', my_table.c.data, mysql_length=10)
522 Index('a_b_idx', my_table.c.a, my_table.c.b, mysql_length={'a': 4,
523 'b': 9})
525Prefix lengths are given in characters for nonbinary string types and in bytes
526for binary string types. The value passed to the keyword argument *must* be
527either an integer (and, thus, specify the same prefix length value for all
528columns of the index) or a dict in which keys are column names and values are
529prefix length values for corresponding columns. MySQL only allows a length for
530a column of an index if it is for a CHAR, VARCHAR, TEXT, BINARY, VARBINARY and
531BLOB.
533Index Prefixes
534~~~~~~~~~~~~~~
536MySQL storage engines permit you to specify an index prefix when creating
537an index. SQLAlchemy provides this feature via the
538``mysql_prefix`` parameter on :class:`.Index`::
540 Index('my_index', my_table.c.data, mysql_prefix='FULLTEXT')
542The value passed to the keyword argument will be simply passed through to the
543underlying CREATE INDEX, so it *must* be a valid index prefix for your MySQL
544storage engine.
546.. versionadded:: 1.1.5
548.. seealso::
550 `CREATE INDEX <http://dev.mysql.com/doc/refman/5.0/en/create-index.html>`_ - MySQL documentation
552Index Types
553~~~~~~~~~~~~~
555Some MySQL storage engines permit you to specify an index type when creating
556an index or primary key constraint. SQLAlchemy provides this feature via the
557``mysql_using`` parameter on :class:`.Index`::
559 Index('my_index', my_table.c.data, mysql_using='hash')
561As well as the ``mysql_using`` parameter on :class:`.PrimaryKeyConstraint`::
563 PrimaryKeyConstraint("data", mysql_using='hash')
565The value passed to the keyword argument will be simply passed through to the
566underlying CREATE INDEX or PRIMARY KEY clause, so it *must* be a valid index
567type for your MySQL storage engine.
569More information can be found at:
571http://dev.mysql.com/doc/refman/5.0/en/create-index.html
573http://dev.mysql.com/doc/refman/5.0/en/create-table.html
575Index Parsers
576~~~~~~~~~~~~~
578CREATE FULLTEXT INDEX in MySQL also supports a "WITH PARSER" option. This
579is available using the keyword argument ``mysql_with_parser``::
581 Index(
582 'my_index', my_table.c.data,
583 mysql_prefix='FULLTEXT', mysql_with_parser="ngram")
585.. versionadded:: 1.3
588.. _mysql_foreign_keys:
590MySQL Foreign Keys
591------------------
593MySQL's behavior regarding foreign keys has some important caveats.
595Foreign Key Arguments to Avoid
596~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
598MySQL does not support the foreign key arguments "DEFERRABLE", "INITIALLY",
599or "MATCH". Using the ``deferrable`` or ``initially`` keyword argument with
600:class:`_schema.ForeignKeyConstraint` or :class:`_schema.ForeignKey`
601will have the effect of
602these keywords being rendered in a DDL expression, which will then raise an
603error on MySQL. In order to use these keywords on a foreign key while having
604them ignored on a MySQL backend, use a custom compile rule::
606 from sqlalchemy.ext.compiler import compiles
607 from sqlalchemy.schema import ForeignKeyConstraint
609 @compiles(ForeignKeyConstraint, "mysql")
610 def process(element, compiler, **kw):
611 element.deferrable = element.initially = None
612 return compiler.visit_foreign_key_constraint(element, **kw)
614.. versionchanged:: 0.9.0 - the MySQL backend no longer silently ignores
615 the ``deferrable`` or ``initially`` keyword arguments of
616 :class:`_schema.ForeignKeyConstraint` and :class:`_schema.ForeignKey`.
618The "MATCH" keyword is in fact more insidious, and is explicitly disallowed
619by SQLAlchemy in conjunction with the MySQL backend. This argument is
620silently ignored by MySQL, but in addition has the effect of ON UPDATE and ON
621DELETE options also being ignored by the backend. Therefore MATCH should
622never be used with the MySQL backend; as is the case with DEFERRABLE and
623INITIALLY, custom compilation rules can be used to correct a MySQL
624ForeignKeyConstraint at DDL definition time.
626.. versionadded:: 0.9.0 - the MySQL backend will raise a
627 :class:`.CompileError` when the ``match`` keyword is used with
628 :class:`_schema.ForeignKeyConstraint` or :class:`_schema.ForeignKey`.
630Reflection of Foreign Key Constraints
631~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
633Not all MySQL storage engines support foreign keys. When using the
634very common ``MyISAM`` MySQL storage engine, the information loaded by table
635reflection will not include foreign keys. For these tables, you may supply a
636:class:`~sqlalchemy.ForeignKeyConstraint` at reflection time::
638 Table('mytable', metadata,
639 ForeignKeyConstraint(['other_id'], ['othertable.other_id']),
640 autoload=True
641 )
643.. seealso::
645 :ref:`mysql_storage_engines`
647.. _mysql_unique_constraints:
649MySQL Unique Constraints and Reflection
650---------------------------------------
652SQLAlchemy supports both the :class:`.Index` construct with the
653flag ``unique=True``, indicating a UNIQUE index, as well as the
654:class:`.UniqueConstraint` construct, representing a UNIQUE constraint.
655Both objects/syntaxes are supported by MySQL when emitting DDL to create
656these constraints. However, MySQL does not have a unique constraint
657construct that is separate from a unique index; that is, the "UNIQUE"
658constraint on MySQL is equivalent to creating a "UNIQUE INDEX".
660When reflecting these constructs, the
661:meth:`_reflection.Inspector.get_indexes`
662and the :meth:`_reflection.Inspector.get_unique_constraints`
663methods will **both**
664return an entry for a UNIQUE index in MySQL. However, when performing
665full table reflection using ``Table(..., autoload=True)``,
666the :class:`.UniqueConstraint` construct is
667**not** part of the fully reflected :class:`_schema.Table` construct under any
668circumstances; this construct is always represented by a :class:`.Index`
669with the ``unique=True`` setting present in the :attr:`_schema.Table.indexes`
670collection.
673.. _mysql_timestamp_null:
675TIMESTAMP Columns and NULL
676--------------------------
678MySQL historically enforces that a column which specifies the
679TIMESTAMP datatype implicitly includes a default value of
680CURRENT_TIMESTAMP, even though this is not stated, and additionally
681sets the column as NOT NULL, the opposite behavior vs. that of all
682other datatypes::
684 mysql> CREATE TABLE ts_test (
685 -> a INTEGER,
686 -> b INTEGER NOT NULL,
687 -> c TIMESTAMP,
688 -> d TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
689 -> e TIMESTAMP NULL);
690 Query OK, 0 rows affected (0.03 sec)
692 mysql> SHOW CREATE TABLE ts_test;
693 +---------+-----------------------------------------------------
694 | Table | Create Table
695 +---------+-----------------------------------------------------
696 | ts_test | CREATE TABLE `ts_test` (
697 `a` int(11) DEFAULT NULL,
698 `b` int(11) NOT NULL,
699 `c` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
700 `d` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
701 `e` timestamp NULL DEFAULT NULL
702 ) ENGINE=MyISAM DEFAULT CHARSET=latin1
704Above, we see that an INTEGER column defaults to NULL, unless it is specified
705with NOT NULL. But when the column is of type TIMESTAMP, an implicit
706default of CURRENT_TIMESTAMP is generated which also coerces the column
707to be a NOT NULL, even though we did not specify it as such.
709This behavior of MySQL can be changed on the MySQL side using the
710`explicit_defaults_for_timestamp
711<http://dev.mysql.com/doc/refman/5.6/en/server-system-variables.html
712#sysvar_explicit_defaults_for_timestamp>`_ configuration flag introduced in
713MySQL 5.6. With this server setting enabled, TIMESTAMP columns behave like
714any other datatype on the MySQL side with regards to defaults and nullability.
716However, to accommodate the vast majority of MySQL databases that do not
717specify this new flag, SQLAlchemy emits the "NULL" specifier explicitly with
718any TIMESTAMP column that does not specify ``nullable=False``. In order to
719accommodate newer databases that specify ``explicit_defaults_for_timestamp``,
720SQLAlchemy also emits NOT NULL for TIMESTAMP columns that do specify
721``nullable=False``. The following example illustrates::
723 from sqlalchemy import MetaData, Integer, Table, Column, text
724 from sqlalchemy.dialects.mysql import TIMESTAMP
726 m = MetaData()
727 t = Table('ts_test', m,
728 Column('a', Integer),
729 Column('b', Integer, nullable=False),
730 Column('c', TIMESTAMP),
731 Column('d', TIMESTAMP, nullable=False)
732 )
735 from sqlalchemy import create_engine
736 e = create_engine("mysql://scott:tiger@localhost/test", echo=True)
737 m.create_all(e)
739output::
741 CREATE TABLE ts_test (
742 a INTEGER,
743 b INTEGER NOT NULL,
744 c TIMESTAMP NULL,
745 d TIMESTAMP NOT NULL
746 )
748.. versionchanged:: 1.0.0 - SQLAlchemy now renders NULL or NOT NULL in all
749 cases for TIMESTAMP columns, to accommodate
750 ``explicit_defaults_for_timestamp``. Prior to this version, it will
751 not render "NOT NULL" for a TIMESTAMP column that is ``nullable=False``.
753""" # noqa
755from array import array as _array
756from collections import defaultdict
757import re
758import sys
760from sqlalchemy import literal_column
761from sqlalchemy.sql import visitors
762from . import reflection as _reflection
763from .enumerated import ENUM
764from .enumerated import SET
765from .json import JSON
766from .json import JSONIndexType
767from .json import JSONPathType
768from .types import _FloatType
769from .types import _IntegerType
770from .types import _MatchType
771from .types import _NumericType
772from .types import _StringType
773from .types import BIGINT
774from .types import BIT
775from .types import CHAR
776from .types import DATETIME
777from .types import DECIMAL
778from .types import DOUBLE
779from .types import FLOAT
780from .types import INTEGER
781from .types import LONGBLOB
782from .types import LONGTEXT
783from .types import MEDIUMBLOB
784from .types import MEDIUMINT
785from .types import MEDIUMTEXT
786from .types import NCHAR
787from .types import NUMERIC
788from .types import NVARCHAR
789from .types import REAL
790from .types import SMALLINT
791from .types import TEXT
792from .types import TIME
793from .types import TIMESTAMP
794from .types import TINYBLOB
795from .types import TINYINT
796from .types import TINYTEXT
797from .types import VARCHAR
798from .types import YEAR
799from ... import exc
800from ... import log
801from ... import schema as sa_schema
802from ... import sql
803from ... import types as sqltypes
804from ... import util
805from ...engine import default
806from ...engine import reflection
807from ...sql import compiler
808from ...sql import elements
809from ...sql import util as sql_util
810from ...types import BINARY
811from ...types import BLOB
812from ...types import BOOLEAN
813from ...types import DATE
814from ...types import VARBINARY
815from ...util import topological
818RESERVED_WORDS = set(
819 [
820 "accessible",
821 "accessible",
822 "add",
823 "admin",
824 "all",
825 "alter",
826 "analyze",
827 "and",
828 "array", # 8.0
829 "as",
830 "asc",
831 "asensitive",
832 "before",
833 "between",
834 "bigint",
835 "binary",
836 "blob",
837 "both",
838 "by",
839 "call",
840 "cascade",
841 "case",
842 "change",
843 "char",
844 "character",
845 "check",
846 "collate",
847 "column",
848 "columns",
849 "condition",
850 "constraint",
851 "continue",
852 "convert",
853 "create",
854 "cross",
855 "cume_dist",
856 "current_date",
857 "current_time",
858 "current_timestamp",
859 "current_user",
860 "cursor",
861 "database",
862 "databases",
863 "day_hour",
864 "day_microsecond",
865 "day_minute",
866 "day_second",
867 "dec",
868 "decimal",
869 "declare",
870 "default",
871 "delayed",
872 "delete",
873 "desc",
874 "describe",
875 "deterministic",
876 "distinct",
877 "distinctrow",
878 "div",
879 "double",
880 "drop",
881 "dual",
882 "each",
883 "else",
884 "elseif",
885 "empty",
886 "enclosed",
887 "escaped",
888 "except",
889 "exists",
890 "exit",
891 "explain",
892 "false",
893 "fetch",
894 "fields",
895 "first_value",
896 "float",
897 "float4",
898 "float8",
899 "for",
900 "force",
901 "foreign",
902 "from",
903 "fulltext",
904 "function",
905 "general",
906 "generated",
907 "get",
908 "grant",
909 "group",
910 "grouping",
911 "groups",
912 "having",
913 "high_priority",
914 "hour_microsecond",
915 "hour_minute",
916 "hour_second",
917 "if",
918 "ignore",
919 "ignore_server_ids",
920 "in",
921 "index",
922 "infile",
923 "inner",
924 "inout",
925 "insensitive",
926 "insert",
927 "int",
928 "int1",
929 "int2",
930 "int3",
931 "int4",
932 "int8",
933 "integer",
934 "interval",
935 "into",
936 "io_after_gtids",
937 "io_before_gtids",
938 "is",
939 "iterate",
940 "join",
941 "json_table",
942 "key",
943 "keys",
944 "kill",
945 "last_value",
946 "leading",
947 "leave",
948 "left",
949 "like",
950 "limit",
951 "linear",
952 "linear",
953 "lines",
954 "load",
955 "localtime",
956 "localtimestamp",
957 "lock",
958 "long",
959 "longblob",
960 "longtext",
961 "loop",
962 "low_priority",
963 "master_bind",
964 "master_heartbeat_period",
965 "master_ssl_verify_server_cert",
966 "master_ssl_verify_server_cert",
967 "match",
968 "maxvalue",
969 "mediumblob",
970 "mediumint",
971 "mediumtext",
972 "member", # 8.0
973 "middleint",
974 "minute_microsecond",
975 "minute_second",
976 "mod",
977 "modifies",
978 "natural",
979 "no_write_to_binlog",
980 "not",
981 "nth_value",
982 "ntile",
983 "null",
984 "numeric",
985 "of",
986 "on",
987 "one_shot",
988 "optimize",
989 "optimizer_costs",
990 "option",
991 "optionally",
992 "or",
993 "order",
994 "out",
995 "outer",
996 "outfile",
997 "over",
998 "partition",
999 "percent_rank",
1000 "persist",
1001 "persist_only",
1002 "precision",
1003 "primary",
1004 "privileges",
1005 "procedure",
1006 "purge",
1007 "range",
1008 "range",
1009 "rank",
1010 "read",
1011 "read_only",
1012 "read_only",
1013 "read_write",
1014 "read_write", # 5.1
1015 "reads",
1016 "real",
1017 "recursive",
1018 "references",
1019 "regexp",
1020 "release",
1021 "rename",
1022 "repeat",
1023 "replace",
1024 "require",
1025 "resignal",
1026 "restrict",
1027 "return",
1028 "revoke",
1029 "right",
1030 "rlike",
1031 "role",
1032 "row",
1033 "row_number",
1034 "rows",
1035 "schema",
1036 "schemas",
1037 "second_microsecond",
1038 "select",
1039 "sensitive",
1040 "separator",
1041 "set",
1042 "show",
1043 "signal",
1044 "slow", # 5.5
1045 "smallint",
1046 "soname",
1047 "spatial",
1048 "specific",
1049 "sql",
1050 "sql_after_gtids",
1051 "sql_before_gtids", # 5.6
1052 "sql_big_result",
1053 "sql_calc_found_rows",
1054 "sql_small_result",
1055 "sqlexception",
1056 "sqlstate",
1057 "sqlwarning",
1058 "ssl",
1059 "starting",
1060 "stored",
1061 "straight_join",
1062 "system",
1063 "table",
1064 "tables", # 4.1
1065 "terminated",
1066 "then",
1067 "tinyblob",
1068 "tinyint",
1069 "tinytext",
1070 "to",
1071 "trailing",
1072 "trigger",
1073 "true",
1074 "undo",
1075 "union",
1076 "unique",
1077 "unlock",
1078 "unsigned",
1079 "update",
1080 "usage",
1081 "use",
1082 "using",
1083 "utc_date",
1084 "utc_time",
1085 "utc_timestamp",
1086 "values",
1087 "varbinary",
1088 "varchar",
1089 "varcharacter",
1090 "varying",
1091 "virtual", # 5.7
1092 "when",
1093 "where",
1094 "while",
1095 "window", # 8.0
1096 "with",
1097 "write",
1098 "x509",
1099 "xor",
1100 "year_month",
1101 "zerofill", # 5.0
1102 ]
1103)
1105AUTOCOMMIT_RE = re.compile(
1106 r"\s*(?:UPDATE|INSERT|CREATE|DELETE|DROP|ALTER|LOAD +DATA|REPLACE)",
1107 re.I | re.UNICODE,
1108)
1109SET_RE = re.compile(
1110 r"\s*SET\s+(?:(?:GLOBAL|SESSION)\s+)?\w", re.I | re.UNICODE
1111)
1114# old names
1115MSTime = TIME
1116MSSet = SET
1117MSEnum = ENUM
1118MSLongBlob = LONGBLOB
1119MSMediumBlob = MEDIUMBLOB
1120MSTinyBlob = TINYBLOB
1121MSBlob = BLOB
1122MSBinary = BINARY
1123MSVarBinary = VARBINARY
1124MSNChar = NCHAR
1125MSNVarChar = NVARCHAR
1126MSChar = CHAR
1127MSString = VARCHAR
1128MSLongText = LONGTEXT
1129MSMediumText = MEDIUMTEXT
1130MSTinyText = TINYTEXT
1131MSText = TEXT
1132MSYear = YEAR
1133MSTimeStamp = TIMESTAMP
1134MSBit = BIT
1135MSSmallInteger = SMALLINT
1136MSTinyInteger = TINYINT
1137MSMediumInteger = MEDIUMINT
1138MSBigInteger = BIGINT
1139MSNumeric = NUMERIC
1140MSDecimal = DECIMAL
1141MSDouble = DOUBLE
1142MSReal = REAL
1143MSFloat = FLOAT
1144MSInteger = INTEGER
1146colspecs = {
1147 _IntegerType: _IntegerType,
1148 _NumericType: _NumericType,
1149 _FloatType: _FloatType,
1150 sqltypes.Numeric: NUMERIC,
1151 sqltypes.Float: FLOAT,
1152 sqltypes.Time: TIME,
1153 sqltypes.Enum: ENUM,
1154 sqltypes.MatchType: _MatchType,
1155 sqltypes.JSON: JSON,
1156 sqltypes.JSON.JSONIndexType: JSONIndexType,
1157 sqltypes.JSON.JSONPathType: JSONPathType,
1158}
1160# Everything 3.23 through 5.1 excepting OpenGIS types.
1161ischema_names = {
1162 "bigint": BIGINT,
1163 "binary": BINARY,
1164 "bit": BIT,
1165 "blob": BLOB,
1166 "boolean": BOOLEAN,
1167 "char": CHAR,
1168 "date": DATE,
1169 "datetime": DATETIME,
1170 "decimal": DECIMAL,
1171 "double": DOUBLE,
1172 "enum": ENUM,
1173 "fixed": DECIMAL,
1174 "float": FLOAT,
1175 "int": INTEGER,
1176 "integer": INTEGER,
1177 "json": JSON,
1178 "longblob": LONGBLOB,
1179 "longtext": LONGTEXT,
1180 "mediumblob": MEDIUMBLOB,
1181 "mediumint": MEDIUMINT,
1182 "mediumtext": MEDIUMTEXT,
1183 "nchar": NCHAR,
1184 "nvarchar": NVARCHAR,
1185 "numeric": NUMERIC,
1186 "set": SET,
1187 "smallint": SMALLINT,
1188 "text": TEXT,
1189 "time": TIME,
1190 "timestamp": TIMESTAMP,
1191 "tinyblob": TINYBLOB,
1192 "tinyint": TINYINT,
1193 "tinytext": TINYTEXT,
1194 "varbinary": VARBINARY,
1195 "varchar": VARCHAR,
1196 "year": YEAR,
1197}
1200class MySQLExecutionContext(default.DefaultExecutionContext):
1201 def should_autocommit_text(self, statement):
1202 return AUTOCOMMIT_RE.match(statement)
1204 def create_server_side_cursor(self):
1205 if self.dialect.supports_server_side_cursors:
1206 return self._dbapi_connection.cursor(self.dialect._sscursor)
1207 else:
1208 raise NotImplementedError()
1211class MySQLCompiler(compiler.SQLCompiler):
1213 render_table_with_column_in_update_from = True
1214 """Overridden from base SQLCompiler value"""
1216 extract_map = compiler.SQLCompiler.extract_map.copy()
1217 extract_map.update({"milliseconds": "millisecond"})
1219 def visit_random_func(self, fn, **kw):
1220 return "rand%s" % self.function_argspec(fn)
1222 def visit_sysdate_func(self, fn, **kw):
1223 return "SYSDATE()"
1225 def _render_json_extract_from_binary(self, binary, operator, **kw):
1226 # note we are intentionally calling upon the process() calls in the
1227 # order in which they appear in the SQL String as this is used
1228 # by positional parameter rendering
1230 if binary.type._type_affinity is sqltypes.JSON:
1231 return "JSON_EXTRACT(%s, %s)" % (
1232 self.process(binary.left, **kw),
1233 self.process(binary.right, **kw),
1234 )
1236 # for non-JSON, MySQL doesn't handle JSON null at all so it has to
1237 # be explicit
1238 case_expression = "CASE JSON_EXTRACT(%s, %s) WHEN 'null' THEN NULL" % (
1239 self.process(binary.left, **kw),
1240 self.process(binary.right, **kw),
1241 )
1243 if binary.type._type_affinity is sqltypes.Integer:
1244 type_expression = (
1245 "ELSE CAST(JSON_EXTRACT(%s, %s) AS SIGNED INTEGER)"
1246 % (
1247 self.process(binary.left, **kw),
1248 self.process(binary.right, **kw),
1249 )
1250 )
1251 elif binary.type._type_affinity is sqltypes.Numeric:
1252 # FLOAT / REAL not added in MySQL til 8.0.17
1253 type_expression = (
1254 "ELSE CAST(JSON_EXTRACT(%s, %s) AS DECIMAL(10, 6))"
1255 % (
1256 self.process(binary.left, **kw),
1257 self.process(binary.right, **kw),
1258 )
1259 )
1260 elif binary.type._type_affinity is sqltypes.Boolean:
1261 # the NULL handling is particularly weird with boolean, so
1262 # explicitly return true/false constants
1263 type_expression = "WHEN true THEN true ELSE false"
1264 elif binary.type._type_affinity is sqltypes.String:
1265 # this fails with a JSON value that's a four byte unicode
1266 # string. SQLite has the same problem at the moment
1267 type_expression = "ELSE JSON_UNQUOTE(JSON_EXTRACT(%s, %s))" % (
1268 self.process(binary.left, **kw),
1269 self.process(binary.right, **kw),
1270 )
1271 else:
1272 # other affinity....this is not expected right now
1273 type_expression = "ELSE JSON_EXTRACT(%s, %s)"
1275 return case_expression + " " + type_expression + " END"
1277 def visit_json_getitem_op_binary(self, binary, operator, **kw):
1278 return self._render_json_extract_from_binary(binary, operator, **kw)
1280 def visit_json_path_getitem_op_binary(self, binary, operator, **kw):
1281 return self._render_json_extract_from_binary(binary, operator, **kw)
1283 def visit_on_duplicate_key_update(self, on_duplicate, **kw):
1284 if on_duplicate._parameter_ordering:
1285 parameter_ordering = [
1286 elements._column_as_key(key)
1287 for key in on_duplicate._parameter_ordering
1288 ]
1289 ordered_keys = set(parameter_ordering)
1290 cols = [
1291 self.statement.table.c[key]
1292 for key in parameter_ordering
1293 if key in self.statement.table.c
1294 ] + [
1295 c for c in self.statement.table.c if c.key not in ordered_keys
1296 ]
1297 else:
1298 cols = self.statement.table.c
1300 clauses = []
1301 # traverses through all table columns to preserve table column order
1302 for column in (col for col in cols if col.key in on_duplicate.update):
1304 val = on_duplicate.update[column.key]
1306 if elements._is_literal(val):
1307 val = elements.BindParameter(None, val, type_=column.type)
1308 value_text = self.process(val.self_group(), use_schema=False)
1309 else:
1311 def replace(obj):
1312 if (
1313 isinstance(obj, elements.BindParameter)
1314 and obj.type._isnull
1315 ):
1316 obj = obj._clone()
1317 obj.type = column.type
1318 return obj
1319 elif (
1320 isinstance(obj, elements.ColumnClause)
1321 and obj.table is on_duplicate.inserted_alias
1322 ):
1323 obj = literal_column(
1324 "VALUES(" + self.preparer.quote(column.name) + ")"
1325 )
1326 return obj
1327 else:
1328 # element is not replaced
1329 return None
1331 val = visitors.replacement_traverse(val, {}, replace)
1332 value_text = self.process(val.self_group(), use_schema=False)
1334 name_text = self.preparer.quote(column.name)
1335 clauses.append("%s = %s" % (name_text, value_text))
1337 non_matching = set(on_duplicate.update) - set(c.key for c in cols)
1338 if non_matching:
1339 util.warn(
1340 "Additional column names not matching "
1341 "any column keys in table '%s': %s"
1342 % (
1343 self.statement.table.name,
1344 (", ".join("'%s'" % c for c in non_matching)),
1345 )
1346 )
1348 return "ON DUPLICATE KEY UPDATE " + ", ".join(clauses)
1350 def visit_concat_op_binary(self, binary, operator, **kw):
1351 return "concat(%s, %s)" % (
1352 self.process(binary.left, **kw),
1353 self.process(binary.right, **kw),
1354 )
1356 def visit_match_op_binary(self, binary, operator, **kw):
1357 return "MATCH (%s) AGAINST (%s IN BOOLEAN MODE)" % (
1358 self.process(binary.left, **kw),
1359 self.process(binary.right, **kw),
1360 )
1362 def get_from_hint_text(self, table, text):
1363 return text
1365 def visit_typeclause(self, typeclause, type_=None, **kw):
1366 if type_ is None:
1367 type_ = typeclause.type.dialect_impl(self.dialect)
1368 if isinstance(type_, sqltypes.TypeDecorator):
1369 return self.visit_typeclause(typeclause, type_.impl, **kw)
1370 elif isinstance(type_, sqltypes.Integer):
1371 if getattr(type_, "unsigned", False):
1372 return "UNSIGNED INTEGER"
1373 else:
1374 return "SIGNED INTEGER"
1375 elif isinstance(type_, sqltypes.TIMESTAMP):
1376 return "DATETIME"
1377 elif isinstance(
1378 type_,
1379 (
1380 sqltypes.DECIMAL,
1381 sqltypes.DateTime,
1382 sqltypes.Date,
1383 sqltypes.Time,
1384 ),
1385 ):
1386 return self.dialect.type_compiler.process(type_)
1387 elif isinstance(type_, sqltypes.String) and not isinstance(
1388 type_, (ENUM, SET)
1389 ):
1390 adapted = CHAR._adapt_string_for_cast(type_)
1391 return self.dialect.type_compiler.process(adapted)
1392 elif isinstance(type_, sqltypes._Binary):
1393 return "BINARY"
1394 elif isinstance(type_, sqltypes.JSON):
1395 return "JSON"
1396 elif isinstance(type_, sqltypes.NUMERIC):
1397 return self.dialect.type_compiler.process(type_).replace(
1398 "NUMERIC", "DECIMAL"
1399 )
1400 else:
1401 return None
1403 def visit_cast(self, cast, **kw):
1404 # No cast until 4, no decimals until 5.
1405 if not self.dialect._supports_cast:
1406 util.warn(
1407 "Current MySQL version does not support "
1408 "CAST; the CAST will be skipped."
1409 )
1410 return self.process(cast.clause.self_group(), **kw)
1412 type_ = self.process(cast.typeclause)
1413 if type_ is None:
1414 util.warn(
1415 "Datatype %s does not support CAST on MySQL; "
1416 "the CAST will be skipped."
1417 % self.dialect.type_compiler.process(cast.typeclause.type)
1418 )
1419 return self.process(cast.clause.self_group(), **kw)
1421 return "CAST(%s AS %s)" % (self.process(cast.clause, **kw), type_)
1423 def render_literal_value(self, value, type_):
1424 value = super(MySQLCompiler, self).render_literal_value(value, type_)
1425 if self.dialect._backslash_escapes:
1426 value = value.replace("\\", "\\\\")
1427 return value
1429 # override native_boolean=False behavior here, as
1430 # MySQL still supports native boolean
1431 def visit_true(self, element, **kw):
1432 return "true"
1434 def visit_false(self, element, **kw):
1435 return "false"
1437 def get_select_precolumns(self, select, **kw):
1438 """Add special MySQL keywords in place of DISTINCT.
1440 .. note::
1442 this usage is deprecated. :meth:`_expression.Select.prefix_with`
1443 should be used for special keywords at the start
1444 of a SELECT.
1446 """
1447 if isinstance(select._distinct, util.string_types):
1448 return select._distinct.upper() + " "
1449 elif select._distinct:
1450 return "DISTINCT "
1451 else:
1452 return ""
1454 def visit_join(self, join, asfrom=False, **kwargs):
1455 if join.full:
1456 join_type = " FULL OUTER JOIN "
1457 elif join.isouter:
1458 join_type = " LEFT OUTER JOIN "
1459 else:
1460 join_type = " INNER JOIN "
1462 return "".join(
1463 (
1464 self.process(join.left, asfrom=True, **kwargs),
1465 join_type,
1466 self.process(join.right, asfrom=True, **kwargs),
1467 " ON ",
1468 self.process(join.onclause, **kwargs),
1469 )
1470 )
1472 def for_update_clause(self, select, **kw):
1473 if select._for_update_arg.read:
1474 tmp = " LOCK IN SHARE MODE"
1475 else:
1476 tmp = " FOR UPDATE"
1478 if select._for_update_arg.of and self.dialect.supports_for_update_of:
1480 tables = util.OrderedSet()
1481 for c in select._for_update_arg.of:
1482 tables.update(sql_util.surface_selectables_only(c))
1484 tmp += " OF " + ", ".join(
1485 self.process(table, ashint=True, use_schema=False, **kw)
1486 for table in tables
1487 )
1489 if select._for_update_arg.nowait:
1490 tmp += " NOWAIT"
1492 if select._for_update_arg.skip_locked and self.dialect._is_mysql:
1493 tmp += " SKIP LOCKED"
1495 return tmp
1497 def limit_clause(self, select, **kw):
1498 # MySQL supports:
1499 # LIMIT <limit>
1500 # LIMIT <offset>, <limit>
1501 # and in server versions > 3.3:
1502 # LIMIT <limit> OFFSET <offset>
1503 # The latter is more readable for offsets but we're stuck with the
1504 # former until we can refine dialects by server revision.
1506 limit_clause, offset_clause = (
1507 select._limit_clause,
1508 select._offset_clause,
1509 )
1511 if limit_clause is None and offset_clause is None:
1512 return ""
1513 elif offset_clause is not None:
1514 # As suggested by the MySQL docs, need to apply an
1515 # artificial limit if one wasn't provided
1516 # http://dev.mysql.com/doc/refman/5.0/en/select.html
1517 if limit_clause is None:
1518 # hardwire the upper limit. Currently
1519 # needed by OurSQL with Python 3
1520 # (https://bugs.launchpad.net/oursql/+bug/686232),
1521 # but also is consistent with the usage of the upper
1522 # bound as part of MySQL's "syntax" for OFFSET with
1523 # no LIMIT
1524 return " \n LIMIT %s, %s" % (
1525 self.process(offset_clause, **kw),
1526 "18446744073709551615",
1527 )
1528 else:
1529 return " \n LIMIT %s, %s" % (
1530 self.process(offset_clause, **kw),
1531 self.process(limit_clause, **kw),
1532 )
1533 else:
1534 # No offset provided, so just use the limit
1535 return " \n LIMIT %s" % (self.process(limit_clause, **kw),)
1537 def update_limit_clause(self, update_stmt):
1538 limit = update_stmt.kwargs.get("%s_limit" % self.dialect.name, None)
1539 if limit:
1540 return "LIMIT %s" % limit
1541 else:
1542 return None
1544 def update_tables_clause(self, update_stmt, from_table, extra_froms, **kw):
1545 return ", ".join(
1546 t._compiler_dispatch(self, asfrom=True, **kw)
1547 for t in [from_table] + list(extra_froms)
1548 )
1550 def update_from_clause(
1551 self, update_stmt, from_table, extra_froms, from_hints, **kw
1552 ):
1553 return None
1555 def delete_table_clause(self, delete_stmt, from_table, extra_froms):
1556 """If we have extra froms make sure we render any alias as hint."""
1557 ashint = False
1558 if extra_froms:
1559 ashint = True
1560 return from_table._compiler_dispatch(
1561 self, asfrom=True, iscrud=True, ashint=ashint
1562 )
1564 def delete_extra_from_clause(
1565 self, delete_stmt, from_table, extra_froms, from_hints, **kw
1566 ):
1567 """Render the DELETE .. USING clause specific to MySQL."""
1568 return "USING " + ", ".join(
1569 t._compiler_dispatch(self, asfrom=True, fromhints=from_hints, **kw)
1570 for t in [from_table] + extra_froms
1571 )
1573 def visit_empty_set_expr(self, element_types):
1574 return (
1575 "SELECT %(outer)s FROM (SELECT %(inner)s) "
1576 "as _empty_set WHERE 1!=1"
1577 % {
1578 "inner": ", ".join(
1579 "1 AS _in_%s" % idx
1580 for idx, type_ in enumerate(element_types)
1581 ),
1582 "outer": ", ".join(
1583 "_in_%s" % idx for idx, type_ in enumerate(element_types)
1584 ),
1585 }
1586 )
1588 def visit_is_distinct_from_binary(self, binary, operator, **kw):
1589 return "NOT (%s <=> %s)" % (
1590 self.process(binary.left),
1591 self.process(binary.right),
1592 )
1594 def visit_isnot_distinct_from_binary(self, binary, operator, **kw):
1595 return "%s <=> %s" % (
1596 self.process(binary.left),
1597 self.process(binary.right),
1598 )
1601class MySQLDDLCompiler(compiler.DDLCompiler):
1602 def get_column_specification(self, column, **kw):
1603 """Builds column DDL."""
1605 colspec = [
1606 self.preparer.format_column(column),
1607 self.dialect.type_compiler.process(
1608 column.type, type_expression=column
1609 ),
1610 ]
1612 if column.computed is not None:
1613 colspec.append(self.process(column.computed))
1615 is_timestamp = isinstance(
1616 column.type._unwrapped_dialect_impl(self.dialect),
1617 sqltypes.TIMESTAMP,
1618 )
1620 if not column.nullable:
1621 colspec.append("NOT NULL")
1623 # see: http://docs.sqlalchemy.org/en/latest/dialects/
1624 # mysql.html#mysql_timestamp_null
1625 elif column.nullable and is_timestamp:
1626 colspec.append("NULL")
1628 default = self.get_column_default_string(column)
1629 if default is not None:
1630 colspec.append("DEFAULT " + default)
1632 comment = column.comment
1633 if comment is not None:
1634 literal = self.sql_compiler.render_literal_value(
1635 comment, sqltypes.String()
1636 )
1637 colspec.append("COMMENT " + literal)
1639 if (
1640 column.table is not None
1641 and column is column.table._autoincrement_column
1642 and column.server_default is None
1643 ):
1644 colspec.append("AUTO_INCREMENT")
1646 return " ".join(colspec)
1648 def post_create_table(self, table):
1649 """Build table-level CREATE options like ENGINE and COLLATE."""
1651 table_opts = []
1653 opts = dict(
1654 (k[len(self.dialect.name) + 1 :].upper(), v)
1655 for k, v in table.kwargs.items()
1656 if k.startswith("%s_" % self.dialect.name)
1657 )
1659 if table.comment is not None:
1660 opts["COMMENT"] = table.comment
1662 partition_options = [
1663 "PARTITION_BY",
1664 "PARTITIONS",
1665 "SUBPARTITIONS",
1666 "SUBPARTITION_BY",
1667 ]
1669 nonpart_options = set(opts).difference(partition_options)
1670 part_options = set(opts).intersection(partition_options)
1672 for opt in topological.sort(
1673 [
1674 ("DEFAULT_CHARSET", "COLLATE"),
1675 ("DEFAULT_CHARACTER_SET", "COLLATE"),
1676 ],
1677 nonpart_options,
1678 ):
1679 arg = opts[opt]
1680 if opt in _reflection._options_of_type_string:
1682 arg = self.sql_compiler.render_literal_value(
1683 arg, sqltypes.String()
1684 )
1686 if opt in (
1687 "DATA_DIRECTORY",
1688 "INDEX_DIRECTORY",
1689 "DEFAULT_CHARACTER_SET",
1690 "CHARACTER_SET",
1691 "DEFAULT_CHARSET",
1692 "DEFAULT_COLLATE",
1693 ):
1694 opt = opt.replace("_", " ")
1696 joiner = "="
1697 if opt in (
1698 "TABLESPACE",
1699 "DEFAULT CHARACTER SET",
1700 "CHARACTER SET",
1701 "COLLATE",
1702 ):
1703 joiner = " "
1705 table_opts.append(joiner.join((opt, arg)))
1707 for opt in topological.sort(
1708 [
1709 ("PARTITION_BY", "PARTITIONS"),
1710 ("PARTITION_BY", "SUBPARTITION_BY"),
1711 ("PARTITION_BY", "SUBPARTITIONS"),
1712 ("PARTITIONS", "SUBPARTITIONS"),
1713 ("PARTITIONS", "SUBPARTITION_BY"),
1714 ("SUBPARTITION_BY", "SUBPARTITIONS"),
1715 ],
1716 part_options,
1717 ):
1718 arg = opts[opt]
1719 if opt in _reflection._options_of_type_string:
1720 arg = self.sql_compiler.render_literal_value(
1721 arg, sqltypes.String()
1722 )
1724 opt = opt.replace("_", " ")
1725 joiner = " "
1727 table_opts.append(joiner.join((opt, arg)))
1729 return " ".join(table_opts)
1731 def visit_create_index(self, create, **kw):
1732 index = create.element
1733 self._verify_index_table(index)
1734 preparer = self.preparer
1735 table = preparer.format_table(index.table)
1736 columns = [
1737 self.sql_compiler.process(
1738 expr, include_table=False, literal_binds=True
1739 )
1740 for expr in index.expressions
1741 ]
1743 name = self._prepared_index_name(index)
1745 text = "CREATE "
1746 if index.unique:
1747 text += "UNIQUE "
1749 index_prefix = index.kwargs.get("mysql_prefix", None)
1750 if index_prefix:
1751 text += index_prefix + " "
1753 text += "INDEX %s ON %s " % (name, table)
1755 length = index.dialect_options["mysql"]["length"]
1756 if length is not None:
1758 if isinstance(length, dict):
1759 # length value can be a (column_name --> integer value)
1760 # mapping specifying the prefix length for each column of the
1761 # index
1762 columns = ", ".join(
1763 "%s(%d)" % (expr, length[col.name])
1764 if col.name in length
1765 else (
1766 "%s(%d)" % (expr, length[expr])
1767 if expr in length
1768 else "%s" % expr
1769 )
1770 for col, expr in zip(index.expressions, columns)
1771 )
1772 else:
1773 # or can be an integer value specifying the same
1774 # prefix length for all columns of the index
1775 columns = ", ".join(
1776 "%s(%d)" % (col, length) for col in columns
1777 )
1778 else:
1779 columns = ", ".join(columns)
1780 text += "(%s)" % columns
1782 parser = index.dialect_options["mysql"]["with_parser"]
1783 if parser is not None:
1784 text += " WITH PARSER %s" % (parser,)
1786 using = index.dialect_options["mysql"]["using"]
1787 if using is not None:
1788 text += " USING %s" % (preparer.quote(using))
1790 return text
1792 def visit_primary_key_constraint(self, constraint):
1793 text = super(MySQLDDLCompiler, self).visit_primary_key_constraint(
1794 constraint
1795 )
1796 using = constraint.dialect_options["mysql"]["using"]
1797 if using:
1798 text += " USING %s" % (self.preparer.quote(using))
1799 return text
1801 def visit_drop_index(self, drop):
1802 index = drop.element
1804 return "\nDROP INDEX %s ON %s" % (
1805 self._prepared_index_name(index, include_schema=False),
1806 self.preparer.format_table(index.table),
1807 )
1809 def visit_drop_constraint(self, drop):
1810 constraint = drop.element
1811 if isinstance(constraint, sa_schema.ForeignKeyConstraint):
1812 qual = "FOREIGN KEY "
1813 const = self.preparer.format_constraint(constraint)
1814 elif isinstance(constraint, sa_schema.PrimaryKeyConstraint):
1815 qual = "PRIMARY KEY "
1816 const = ""
1817 elif isinstance(constraint, sa_schema.UniqueConstraint):
1818 qual = "INDEX "
1819 const = self.preparer.format_constraint(constraint)
1820 elif isinstance(constraint, sa_schema.CheckConstraint):
1821 if self.dialect._is_mariadb:
1822 qual = "CONSTRAINT "
1823 else:
1824 qual = "CHECK "
1825 const = self.preparer.format_constraint(constraint)
1826 else:
1827 qual = ""
1828 const = self.preparer.format_constraint(constraint)
1829 return "ALTER TABLE %s DROP %s%s" % (
1830 self.preparer.format_table(constraint.table),
1831 qual,
1832 const,
1833 )
1835 def define_constraint_match(self, constraint):
1836 if constraint.match is not None:
1837 raise exc.CompileError(
1838 "MySQL ignores the 'MATCH' keyword while at the same time "
1839 "causes ON UPDATE/ON DELETE clauses to be ignored."
1840 )
1841 return ""
1843 def visit_set_table_comment(self, create):
1844 return "ALTER TABLE %s COMMENT %s" % (
1845 self.preparer.format_table(create.element),
1846 self.sql_compiler.render_literal_value(
1847 create.element.comment, sqltypes.String()
1848 ),
1849 )
1851 def visit_drop_table_comment(self, create):
1852 return "ALTER TABLE %s COMMENT ''" % (
1853 self.preparer.format_table(create.element)
1854 )
1856 def visit_set_column_comment(self, create):
1857 return "ALTER TABLE %s CHANGE %s %s" % (
1858 self.preparer.format_table(create.element.table),
1859 self.preparer.format_column(create.element),
1860 self.get_column_specification(create.element),
1861 )
1864class MySQLTypeCompiler(compiler.GenericTypeCompiler):
1865 def _extend_numeric(self, type_, spec):
1866 "Extend a numeric-type declaration with MySQL specific extensions."
1868 if not self._mysql_type(type_):
1869 return spec
1871 if type_.unsigned:
1872 spec += " UNSIGNED"
1873 if type_.zerofill:
1874 spec += " ZEROFILL"
1875 return spec
1877 def _extend_string(self, type_, defaults, spec):
1878 """Extend a string-type declaration with standard SQL CHARACTER SET /
1879 COLLATE annotations and MySQL specific extensions.
1881 """
1883 def attr(name):
1884 return getattr(type_, name, defaults.get(name))
1886 if attr("charset"):
1887 charset = "CHARACTER SET %s" % attr("charset")
1888 elif attr("ascii"):
1889 charset = "ASCII"
1890 elif attr("unicode"):
1891 charset = "UNICODE"
1892 else:
1893 charset = None
1895 if attr("collation"):
1896 collation = "COLLATE %s" % type_.collation
1897 elif attr("binary"):
1898 collation = "BINARY"
1899 else:
1900 collation = None
1902 if attr("national"):
1903 # NATIONAL (aka NCHAR/NVARCHAR) trumps charsets.
1904 return " ".join(
1905 [c for c in ("NATIONAL", spec, collation) if c is not None]
1906 )
1907 return " ".join(
1908 [c for c in (spec, charset, collation) if c is not None]
1909 )
1911 def _mysql_type(self, type_):
1912 return isinstance(type_, (_StringType, _NumericType))
1914 def visit_NUMERIC(self, type_, **kw):
1915 if type_.precision is None:
1916 return self._extend_numeric(type_, "NUMERIC")
1917 elif type_.scale is None:
1918 return self._extend_numeric(
1919 type_,
1920 "NUMERIC(%(precision)s)" % {"precision": type_.precision},
1921 )
1922 else:
1923 return self._extend_numeric(
1924 type_,
1925 "NUMERIC(%(precision)s, %(scale)s)"
1926 % {"precision": type_.precision, "scale": type_.scale},
1927 )
1929 def visit_DECIMAL(self, type_, **kw):
1930 if type_.precision is None:
1931 return self._extend_numeric(type_, "DECIMAL")
1932 elif type_.scale is None:
1933 return self._extend_numeric(
1934 type_,
1935 "DECIMAL(%(precision)s)" % {"precision": type_.precision},
1936 )
1937 else:
1938 return self._extend_numeric(
1939 type_,
1940 "DECIMAL(%(precision)s, %(scale)s)"
1941 % {"precision": type_.precision, "scale": type_.scale},
1942 )
1944 def visit_DOUBLE(self, type_, **kw):
1945 if type_.precision is not None and type_.scale is not None:
1946 return self._extend_numeric(
1947 type_,
1948 "DOUBLE(%(precision)s, %(scale)s)"
1949 % {"precision": type_.precision, "scale": type_.scale},
1950 )
1951 else:
1952 return self._extend_numeric(type_, "DOUBLE")
1954 def visit_REAL(self, type_, **kw):
1955 if type_.precision is not None and type_.scale is not None:
1956 return self._extend_numeric(
1957 type_,
1958 "REAL(%(precision)s, %(scale)s)"
1959 % {"precision": type_.precision, "scale": type_.scale},
1960 )
1961 else:
1962 return self._extend_numeric(type_, "REAL")
1964 def visit_FLOAT(self, type_, **kw):
1965 if (
1966 self._mysql_type(type_)
1967 and type_.scale is not None
1968 and type_.precision is not None
1969 ):
1970 return self._extend_numeric(
1971 type_, "FLOAT(%s, %s)" % (type_.precision, type_.scale)
1972 )
1973 elif type_.precision is not None:
1974 return self._extend_numeric(
1975 type_, "FLOAT(%s)" % (type_.precision,)
1976 )
1977 else:
1978 return self._extend_numeric(type_, "FLOAT")
1980 def visit_INTEGER(self, type_, **kw):
1981 if self._mysql_type(type_) and type_.display_width is not None:
1982 return self._extend_numeric(
1983 type_,
1984 "INTEGER(%(display_width)s)"
1985 % {"display_width": type_.display_width},
1986 )
1987 else:
1988 return self._extend_numeric(type_, "INTEGER")
1990 def visit_BIGINT(self, type_, **kw):
1991 if self._mysql_type(type_) and type_.display_width is not None:
1992 return self._extend_numeric(
1993 type_,
1994 "BIGINT(%(display_width)s)"
1995 % {"display_width": type_.display_width},
1996 )
1997 else:
1998 return self._extend_numeric(type_, "BIGINT")
2000 def visit_MEDIUMINT(self, type_, **kw):
2001 if self._mysql_type(type_) and type_.display_width is not None:
2002 return self._extend_numeric(
2003 type_,
2004 "MEDIUMINT(%(display_width)s)"
2005 % {"display_width": type_.display_width},
2006 )
2007 else:
2008 return self._extend_numeric(type_, "MEDIUMINT")
2010 def visit_TINYINT(self, type_, **kw):
2011 if self._mysql_type(type_) and type_.display_width is not None:
2012 return self._extend_numeric(
2013 type_, "TINYINT(%s)" % type_.display_width
2014 )
2015 else:
2016 return self._extend_numeric(type_, "TINYINT")
2018 def visit_SMALLINT(self, type_, **kw):
2019 if self._mysql_type(type_) and type_.display_width is not None:
2020 return self._extend_numeric(
2021 type_,
2022 "SMALLINT(%(display_width)s)"
2023 % {"display_width": type_.display_width},
2024 )
2025 else:
2026 return self._extend_numeric(type_, "SMALLINT")
2028 def visit_BIT(self, type_, **kw):
2029 if type_.length is not None:
2030 return "BIT(%s)" % type_.length
2031 else:
2032 return "BIT"
2034 def visit_DATETIME(self, type_, **kw):
2035 if getattr(type_, "fsp", None):
2036 return "DATETIME(%d)" % type_.fsp
2037 else:
2038 return "DATETIME"
2040 def visit_DATE(self, type_, **kw):
2041 return "DATE"
2043 def visit_TIME(self, type_, **kw):
2044 if getattr(type_, "fsp", None):
2045 return "TIME(%d)" % type_.fsp
2046 else:
2047 return "TIME"
2049 def visit_TIMESTAMP(self, type_, **kw):
2050 if getattr(type_, "fsp", None):
2051 return "TIMESTAMP(%d)" % type_.fsp
2052 else:
2053 return "TIMESTAMP"
2055 def visit_YEAR(self, type_, **kw):
2056 if type_.display_width is None:
2057 return "YEAR"
2058 else:
2059 return "YEAR(%s)" % type_.display_width
2061 def visit_TEXT(self, type_, **kw):
2062 if type_.length:
2063 return self._extend_string(type_, {}, "TEXT(%d)" % type_.length)
2064 else:
2065 return self._extend_string(type_, {}, "TEXT")
2067 def visit_TINYTEXT(self, type_, **kw):
2068 return self._extend_string(type_, {}, "TINYTEXT")
2070 def visit_MEDIUMTEXT(self, type_, **kw):
2071 return self._extend_string(type_, {}, "MEDIUMTEXT")
2073 def visit_LONGTEXT(self, type_, **kw):
2074 return self._extend_string(type_, {}, "LONGTEXT")
2076 def visit_VARCHAR(self, type_, **kw):
2077 if type_.length:
2078 return self._extend_string(type_, {}, "VARCHAR(%d)" % type_.length)
2079 else:
2080 raise exc.CompileError(
2081 "VARCHAR requires a length on dialect %s" % self.dialect.name
2082 )
2084 def visit_CHAR(self, type_, **kw):
2085 if type_.length:
2086 return self._extend_string(
2087 type_, {}, "CHAR(%(length)s)" % {"length": type_.length}
2088 )
2089 else:
2090 return self._extend_string(type_, {}, "CHAR")
2092 def visit_NVARCHAR(self, type_, **kw):
2093 # We'll actually generate the equiv. "NATIONAL VARCHAR" instead
2094 # of "NVARCHAR".
2095 if type_.length:
2096 return self._extend_string(
2097 type_,
2098 {"national": True},
2099 "VARCHAR(%(length)s)" % {"length": type_.length},
2100 )
2101 else:
2102 raise exc.CompileError(
2103 "NVARCHAR requires a length on dialect %s" % self.dialect.name
2104 )
2106 def visit_NCHAR(self, type_, **kw):
2107 # We'll actually generate the equiv.
2108 # "NATIONAL CHAR" instead of "NCHAR".
2109 if type_.length:
2110 return self._extend_string(
2111 type_,
2112 {"national": True},
2113 "CHAR(%(length)s)" % {"length": type_.length},
2114 )
2115 else:
2116 return self._extend_string(type_, {"national": True}, "CHAR")
2118 def visit_VARBINARY(self, type_, **kw):
2119 return "VARBINARY(%d)" % type_.length
2121 def visit_JSON(self, type_, **kw):
2122 return "JSON"
2124 def visit_large_binary(self, type_, **kw):
2125 return self.visit_BLOB(type_)
2127 def visit_enum(self, type_, **kw):
2128 if not type_.native_enum:
2129 return super(MySQLTypeCompiler, self).visit_enum(type_)
2130 else:
2131 return self._visit_enumerated_values("ENUM", type_, type_.enums)
2133 def visit_BLOB(self, type_, **kw):
2134 if type_.length:
2135 return "BLOB(%d)" % type_.length
2136 else:
2137 return "BLOB"
2139 def visit_TINYBLOB(self, type_, **kw):
2140 return "TINYBLOB"
2142 def visit_MEDIUMBLOB(self, type_, **kw):
2143 return "MEDIUMBLOB"
2145 def visit_LONGBLOB(self, type_, **kw):
2146 return "LONGBLOB"
2148 def _visit_enumerated_values(self, name, type_, enumerated_values):
2149 quoted_enums = []
2150 for e in enumerated_values:
2151 quoted_enums.append("'%s'" % e.replace("'", "''"))
2152 return self._extend_string(
2153 type_, {}, "%s(%s)" % (name, ",".join(quoted_enums))
2154 )
2156 def visit_ENUM(self, type_, **kw):
2157 return self._visit_enumerated_values(
2158 "ENUM", type_, type_._enumerated_values
2159 )
2161 def visit_SET(self, type_, **kw):
2162 return self._visit_enumerated_values(
2163 "SET", type_, type_._enumerated_values
2164 )
2166 def visit_BOOLEAN(self, type_, **kw):
2167 return "BOOL"
2170class MySQLIdentifierPreparer(compiler.IdentifierPreparer):
2172 reserved_words = RESERVED_WORDS
2174 def __init__(self, dialect, server_ansiquotes=False, **kw):
2175 if not server_ansiquotes:
2176 quote = "`"
2177 else:
2178 quote = '"'
2180 super(MySQLIdentifierPreparer, self).__init__(
2181 dialect, initial_quote=quote, escape_quote=quote
2182 )
2184 def _quote_free_identifiers(self, *ids):
2185 """Unilaterally identifier-quote any number of strings."""
2187 return tuple([self.quote_identifier(i) for i in ids if i is not None])
2190@log.class_logger
2191class MySQLDialect(default.DefaultDialect):
2192 """Details of the MySQL dialect.
2193 Not used directly in application code.
2194 """
2196 name = "mysql"
2197 supports_alter = True
2199 # MySQL has no true "boolean" type; we
2200 # allow for the "true" and "false" keywords, however
2201 supports_native_boolean = False
2203 # identifiers are 64, however aliases can be 255...
2204 max_identifier_length = 255
2205 max_index_name_length = 64
2207 supports_native_enum = True
2209 supports_for_update_of = False # default for MySQL ...
2210 # ... may be updated to True for MySQL 8+ in initialize()
2212 supports_sane_rowcount = True
2213 supports_sane_multi_rowcount = False
2214 supports_multivalues_insert = True
2216 supports_comments = True
2217 inline_comments = True
2218 default_paramstyle = "format"
2219 colspecs = colspecs
2221 cte_follows_insert = True
2223 statement_compiler = MySQLCompiler
2224 ddl_compiler = MySQLDDLCompiler
2225 type_compiler = MySQLTypeCompiler
2226 ischema_names = ischema_names
2227 preparer = MySQLIdentifierPreparer
2229 # default SQL compilation settings -
2230 # these are modified upon initialize(),
2231 # i.e. first connect
2232 _backslash_escapes = True
2233 _server_ansiquotes = False
2235 construct_arguments = [
2236 (sa_schema.Table, {"*": None}),
2237 (sql.Update, {"limit": None}),
2238 (sa_schema.PrimaryKeyConstraint, {"using": None}),
2239 (
2240 sa_schema.Index,
2241 {
2242 "using": None,
2243 "length": None,
2244 "prefix": None,
2245 "with_parser": None,
2246 },
2247 ),
2248 ]
2250 def __init__(
2251 self,
2252 isolation_level=None,
2253 json_serializer=None,
2254 json_deserializer=None,
2255 **kwargs
2256 ):
2257 kwargs.pop("use_ansiquotes", None) # legacy
2258 default.DefaultDialect.__init__(self, **kwargs)
2259 self.isolation_level = isolation_level
2260 self._json_serializer = json_serializer
2261 self._json_deserializer = json_deserializer
2263 def on_connect(self):
2264 if self.isolation_level is not None:
2266 def connect(conn):
2267 self.set_isolation_level(conn, self.isolation_level)
2269 return connect
2270 else:
2271 return None
2273 _isolation_lookup = set(
2274 [
2275 "SERIALIZABLE",
2276 "READ UNCOMMITTED",
2277 "READ COMMITTED",
2278 "REPEATABLE READ",
2279 ]
2280 )
2282 def set_isolation_level(self, connection, level):
2283 level = level.replace("_", " ")
2285 # adjust for ConnectionFairy being present
2286 # allows attribute set e.g. "connection.autocommit = True"
2287 # to work properly
2288 if hasattr(connection, "connection"):
2289 connection = connection.connection
2291 self._set_isolation_level(connection, level)
2293 def _set_isolation_level(self, connection, level):
2294 if level not in self._isolation_lookup:
2295 raise exc.ArgumentError(
2296 "Invalid value '%s' for isolation_level. "
2297 "Valid isolation levels for %s are %s"
2298 % (level, self.name, ", ".join(self._isolation_lookup))
2299 )
2300 cursor = connection.cursor()
2301 cursor.execute("SET SESSION TRANSACTION ISOLATION LEVEL %s" % level)
2302 cursor.execute("COMMIT")
2303 cursor.close()
2305 def get_isolation_level(self, connection):
2306 cursor = connection.cursor()
2307 if self._is_mysql and self.server_version_info >= (5, 7, 20):
2308 cursor.execute("SELECT @@transaction_isolation")
2309 else:
2310 cursor.execute("SELECT @@tx_isolation")
2311 row = cursor.fetchone()
2312 if row is None:
2313 util.warn(
2314 "Could not retrieve transaction isolation level for MySQL "
2315 "connection."
2316 )
2317 raise NotImplementedError()
2318 val = row[0]
2319 cursor.close()
2320 if util.py3k and isinstance(val, bytes):
2321 val = val.decode()
2322 return val.upper().replace("-", " ")
2324 def _get_server_version_info(self, connection):
2325 # get database server version info explicitly over the wire
2326 # to avoid proxy servers like MaxScale getting in the
2327 # way with their own values, see #4205
2328 dbapi_con = connection.connection
2329 cursor = dbapi_con.cursor()
2330 cursor.execute("SELECT VERSION()")
2331 val = cursor.fetchone()[0]
2332 cursor.close()
2333 if util.py3k and isinstance(val, bytes):
2334 val = val.decode()
2336 return self._parse_server_version(val)
2338 def _parse_server_version(self, val):
2339 version = []
2340 r = re.compile(r"[.\-]")
2341 for n in r.split(val):
2342 try:
2343 version.append(int(n))
2344 except ValueError:
2345 mariadb = re.match(r"(.*)(MariaDB)(.*)", n)
2346 if mariadb:
2347 version.extend(g for g in mariadb.groups() if g)
2348 else:
2349 version.append(n)
2350 return tuple(version)
2352 def do_commit(self, dbapi_connection):
2353 """Execute a COMMIT."""
2355 # COMMIT/ROLLBACK were introduced in 3.23.15.
2356 # Yes, we have at least one user who has to talk to these old
2357 # versions!
2358 #
2359 # Ignore commit/rollback if support isn't present, otherwise even
2360 # basic operations via autocommit fail.
2361 try:
2362 dbapi_connection.commit()
2363 except Exception:
2364 if self.server_version_info < (3, 23, 15):
2365 args = sys.exc_info()[1].args
2366 if args and args[0] == 1064:
2367 return
2368 raise
2370 def do_rollback(self, dbapi_connection):
2371 """Execute a ROLLBACK."""
2373 try:
2374 dbapi_connection.rollback()
2375 except Exception:
2376 if self.server_version_info < (3, 23, 15):
2377 args = sys.exc_info()[1].args
2378 if args and args[0] == 1064:
2379 return
2380 raise
2382 def do_begin_twophase(self, connection, xid):
2383 connection.execute(sql.text("XA BEGIN :xid"), xid=xid)
2385 def do_prepare_twophase(self, connection, xid):
2386 connection.execute(sql.text("XA END :xid"), xid=xid)
2387 connection.execute(sql.text("XA PREPARE :xid"), xid=xid)
2389 def do_rollback_twophase(
2390 self, connection, xid, is_prepared=True, recover=False
2391 ):
2392 if not is_prepared:
2393 connection.execute(sql.text("XA END :xid"), xid=xid)
2394 connection.execute(sql.text("XA ROLLBACK :xid"), xid=xid)
2396 def do_commit_twophase(
2397 self, connection, xid, is_prepared=True, recover=False
2398 ):
2399 if not is_prepared:
2400 self.do_prepare_twophase(connection, xid)
2401 connection.execute(sql.text("XA COMMIT :xid"), xid=xid)
2403 def do_recover_twophase(self, connection):
2404 resultset = connection.execute("XA RECOVER")
2405 return [row["data"][0 : row["gtrid_length"]] for row in resultset]
2407 def is_disconnect(self, e, connection, cursor):
2408 if isinstance(
2409 e, (self.dbapi.OperationalError, self.dbapi.ProgrammingError)
2410 ):
2411 return self._extract_error_code(e) in (
2412 2006,
2413 2013,
2414 2014,
2415 2045,
2416 2055,
2417 )
2418 elif isinstance(
2419 e, (self.dbapi.InterfaceError, self.dbapi.InternalError)
2420 ):
2421 # if underlying connection is closed,
2422 # this is the error you get
2423 return "(0, '')" in str(e)
2424 else:
2425 return False
2427 def _compat_fetchall(self, rp, charset=None):
2428 """Proxy result rows to smooth over MySQL-Python driver
2429 inconsistencies."""
2431 return [_DecodingRowProxy(row, charset) for row in rp.fetchall()]
2433 def _compat_fetchone(self, rp, charset=None):
2434 """Proxy a result row to smooth over MySQL-Python driver
2435 inconsistencies."""
2437 row = rp.fetchone()
2438 if row:
2439 return _DecodingRowProxy(row, charset)
2440 else:
2441 return None
2443 def _compat_first(self, rp, charset=None):
2444 """Proxy a result row to smooth over MySQL-Python driver
2445 inconsistencies."""
2447 row = rp.first()
2448 if row:
2449 return _DecodingRowProxy(row, charset)
2450 else:
2451 return None
2453 def _extract_error_code(self, exception):
2454 raise NotImplementedError()
2456 def _get_default_schema_name(self, connection):
2457 return connection.execute("SELECT DATABASE()").scalar()
2459 def has_table(self, connection, table_name, schema=None):
2460 # SHOW TABLE STATUS LIKE and SHOW TABLES LIKE do not function properly
2461 # on macosx (and maybe win?) with multibyte table names.
2462 #
2463 # TODO: if this is not a problem on win, make the strategy swappable
2464 # based on platform. DESCRIBE is slower.
2466 # [ticket:726]
2467 # full_name = self.identifier_preparer.format_table(table,
2468 # use_schema=True)
2470 full_name = ".".join(
2471 self.identifier_preparer._quote_free_identifiers(
2472 schema, table_name
2473 )
2474 )
2476 st = "DESCRIBE %s" % full_name
2477 rs = None
2478 try:
2479 try:
2480 rs = connection.execution_options(
2481 skip_user_error_events=True
2482 ).execute(st)
2483 have = rs.fetchone() is not None
2484 rs.close()
2485 return have
2486 except exc.DBAPIError as e:
2487 if self._extract_error_code(e.orig) == 1146:
2488 return False
2489 raise
2490 finally:
2491 if rs:
2492 rs.close()
2494 def initialize(self, connection):
2495 self._connection_charset = self._detect_charset(connection)
2496 self._detect_sql_mode(connection)
2497 self._detect_ansiquotes(connection)
2498 self._detect_casing(connection)
2499 if self._server_ansiquotes:
2500 # if ansiquotes == True, build a new IdentifierPreparer
2501 # with the new setting
2502 self.identifier_preparer = self.preparer(
2503 self, server_ansiquotes=self._server_ansiquotes
2504 )
2506 default.DefaultDialect.initialize(self, connection)
2508 self.supports_for_update_of = (
2509 self._is_mysql and self.server_version_info >= (8,)
2510 )
2512 self._needs_correct_for_88718_96365 = (
2513 not self._is_mariadb and self.server_version_info >= (8,)
2514 )
2516 self._warn_for_known_db_issues()
2518 def _warn_for_known_db_issues(self):
2519 if self._is_mariadb:
2520 mdb_version = self._mariadb_normalized_version_info
2521 if mdb_version > (10, 2) and mdb_version < (10, 2, 9):
2522 util.warn(
2523 "MariaDB %r before 10.2.9 has known issues regarding "
2524 "CHECK constraints, which impact handling of NULL values "
2525 "with SQLAlchemy's boolean datatype (MDEV-13596). An "
2526 "additional issue prevents proper migrations of columns "
2527 "with CHECK constraints (MDEV-11114). Please upgrade to "
2528 "MariaDB 10.2.9 or greater, or use the MariaDB 10.1 "
2529 "series, to avoid these issues." % (mdb_version,)
2530 )
2532 @property
2533 def _is_mariadb(self):
2534 return (
2535 self.server_version_info and "MariaDB" in self.server_version_info
2536 )
2538 @property
2539 def _is_mysql(self):
2540 return not self._is_mariadb
2542 @property
2543 def _is_mariadb_102(self):
2544 return self._is_mariadb and self._mariadb_normalized_version_info > (
2545 10,
2546 2,
2547 )
2549 @property
2550 def _mariadb_normalized_version_info(self):
2551 # MariaDB's wire-protocol prepends the server_version with
2552 # the string "5.5"; now that we use @@version we no longer see this.
2554 if self._is_mariadb:
2555 idx = self.server_version_info.index("MariaDB")
2556 return self.server_version_info[idx - 3 : idx]
2557 else:
2558 return self.server_version_info
2560 @property
2561 def _supports_cast(self):
2562 return (
2563 self.server_version_info is None
2564 or self.server_version_info >= (4, 0, 2)
2565 )
2567 @reflection.cache
2568 def get_schema_names(self, connection, **kw):
2569 rp = connection.execute("SHOW schemas")
2570 return [r[0] for r in rp]
2572 @reflection.cache
2573 def get_table_names(self, connection, schema=None, **kw):
2574 """Return a Unicode SHOW TABLES from a given schema."""
2575 if schema is not None:
2576 current_schema = schema
2577 else:
2578 current_schema = self.default_schema_name
2580 charset = self._connection_charset
2581 if self.server_version_info < (5, 0, 2):
2582 rp = connection.execute(
2583 "SHOW TABLES FROM %s"
2584 % self.identifier_preparer.quote_identifier(current_schema)
2585 )
2586 return [
2587 row[0] for row in self._compat_fetchall(rp, charset=charset)
2588 ]
2589 else:
2590 rp = connection.execute(
2591 "SHOW FULL TABLES FROM %s"
2592 % self.identifier_preparer.quote_identifier(current_schema)
2593 )
2595 return [
2596 row[0]
2597 for row in self._compat_fetchall(rp, charset=charset)
2598 if row[1] == "BASE TABLE"
2599 ]
2601 @reflection.cache
2602 def get_view_names(self, connection, schema=None, **kw):
2603 if self.server_version_info < (5, 0, 2):
2604 raise NotImplementedError
2605 if schema is None:
2606 schema = self.default_schema_name
2607 if self.server_version_info < (5, 0, 2):
2608 return self.get_table_names(connection, schema)
2609 charset = self._connection_charset
2610 rp = connection.execute(
2611 "SHOW FULL TABLES FROM %s"
2612 % self.identifier_preparer.quote_identifier(schema)
2613 )
2614 return [
2615 row[0]
2616 for row in self._compat_fetchall(rp, charset=charset)
2617 if row[1] in ("VIEW", "SYSTEM VIEW")
2618 ]
2620 @reflection.cache
2621 def get_table_options(self, connection, table_name, schema=None, **kw):
2623 parsed_state = self._parsed_state_or_create(
2624 connection, table_name, schema, **kw
2625 )
2626 return parsed_state.table_options
2628 @reflection.cache
2629 def get_columns(self, connection, table_name, schema=None, **kw):
2630 parsed_state = self._parsed_state_or_create(
2631 connection, table_name, schema, **kw
2632 )
2633 return parsed_state.columns
2635 @reflection.cache
2636 def get_pk_constraint(self, connection, table_name, schema=None, **kw):
2637 parsed_state = self._parsed_state_or_create(
2638 connection, table_name, schema, **kw
2639 )
2640 for key in parsed_state.keys:
2641 if key["type"] == "PRIMARY":
2642 # There can be only one.
2643 cols = [s[0] for s in key["columns"]]
2644 return {"constrained_columns": cols, "name": None}
2645 return {"constrained_columns": [], "name": None}
2647 @reflection.cache
2648 def get_foreign_keys(self, connection, table_name, schema=None, **kw):
2650 parsed_state = self._parsed_state_or_create(
2651 connection, table_name, schema, **kw
2652 )
2653 default_schema = None
2655 fkeys = []
2657 for spec in parsed_state.fk_constraints:
2658 ref_name = spec["table"][-1]
2659 ref_schema = len(spec["table"]) > 1 and spec["table"][-2] or schema
2661 if not ref_schema:
2662 if default_schema is None:
2663 default_schema = connection.dialect.default_schema_name
2664 if schema == default_schema:
2665 ref_schema = schema
2667 loc_names = spec["local"]
2668 ref_names = spec["foreign"]
2670 con_kw = {}
2671 for opt in ("onupdate", "ondelete"):
2672 if spec.get(opt, False):
2673 con_kw[opt] = spec[opt]
2675 fkey_d = {
2676 "name": spec["name"],
2677 "constrained_columns": loc_names,
2678 "referred_schema": ref_schema,
2679 "referred_table": ref_name,
2680 "referred_columns": ref_names,
2681 "options": con_kw,
2682 }
2683 fkeys.append(fkey_d)
2685 if self._needs_correct_for_88718_96365:
2686 self._correct_for_mysql_bugs_88718_96365(fkeys, connection)
2688 return fkeys
2690 def _correct_for_mysql_bugs_88718_96365(self, fkeys, connection):
2691 # Foreign key is always in lower case (MySQL 8.0)
2692 # https://bugs.mysql.com/bug.php?id=88718
2693 # issue #4344 for SQLAlchemy
2695 # table name also for MySQL 8.0
2696 # https://bugs.mysql.com/bug.php?id=96365
2697 # issue #4751 for SQLAlchemy
2699 # for lower_case_table_names=2, information_schema.columns
2700 # preserves the original table/schema casing, but SHOW CREATE
2701 # TABLE does not. this problem is not in lower_case_table_names=1,
2702 # but use case-insensitive matching for these two modes in any case.
2704 if self._casing in (1, 2):
2706 def lower(s):
2707 return s.lower()
2709 else:
2710 # if on case sensitive, there can be two tables referenced
2711 # with the same name different casing, so we need to use
2712 # case-sensitive matching.
2713 def lower(s):
2714 return s
2716 default_schema_name = connection.dialect.default_schema_name
2717 col_tuples = [
2718 (
2719 lower(rec["referred_schema"] or default_schema_name),
2720 lower(rec["referred_table"]),
2721 col_name,
2722 )
2723 for rec in fkeys
2724 for col_name in rec["referred_columns"]
2725 ]
2727 if col_tuples:
2729 correct_for_wrong_fk_case = connection.execute(
2730 sql.text(
2731 """
2732 select table_schema, table_name, column_name
2733 from information_schema.columns
2734 where (table_schema, table_name, lower(column_name)) in
2735 :table_data;
2736 """
2737 ).bindparams(sql.bindparam("table_data", expanding=True)),
2738 table_data=col_tuples,
2739 )
2741 # in casing=0, table name and schema name come back in their
2742 # exact case.
2743 # in casing=1, table name and schema name come back in lower
2744 # case.
2745 # in casing=2, table name and schema name come back from the
2746 # information_schema.columns view in the case
2747 # that was used in CREATE DATABASE and CREATE TABLE, but
2748 # SHOW CREATE TABLE converts them to *lower case*, therefore
2749 # not matching. So for this case, case-insensitive lookup
2750 # is necessary
2751 d = defaultdict(dict)
2752 for schema, tname, cname in correct_for_wrong_fk_case:
2753 d[(lower(schema), lower(tname))]["SCHEMANAME"] = schema
2754 d[(lower(schema), lower(tname))]["TABLENAME"] = tname
2755 d[(lower(schema), lower(tname))][cname.lower()] = cname
2757 for fkey in fkeys:
2758 rec = d[
2759 (
2760 lower(fkey["referred_schema"] or default_schema_name),
2761 lower(fkey["referred_table"]),
2762 )
2763 ]
2765 fkey["referred_table"] = rec["TABLENAME"]
2766 if fkey["referred_schema"] is not None:
2767 fkey["referred_schema"] = rec["SCHEMANAME"]
2769 fkey["referred_columns"] = [
2770 rec[col.lower()] for col in fkey["referred_columns"]
2771 ]
2773 @reflection.cache
2774 def get_check_constraints(self, connection, table_name, schema=None, **kw):
2775 parsed_state = self._parsed_state_or_create(
2776 connection, table_name, schema, **kw
2777 )
2779 return [
2780 {"name": spec["name"], "sqltext": spec["sqltext"]}
2781 for spec in parsed_state.ck_constraints
2782 ]
2784 @reflection.cache
2785 def get_table_comment(self, connection, table_name, schema=None, **kw):
2786 parsed_state = self._parsed_state_or_create(
2787 connection, table_name, schema, **kw
2788 )
2789 return {"text": parsed_state.table_options.get("mysql_comment", None)}
2791 @reflection.cache
2792 def get_indexes(self, connection, table_name, schema=None, **kw):
2794 parsed_state = self._parsed_state_or_create(
2795 connection, table_name, schema, **kw
2796 )
2798 indexes = []
2800 for spec in parsed_state.keys:
2801 dialect_options = {}
2802 unique = False
2803 flavor = spec["type"]
2804 if flavor == "PRIMARY":
2805 continue
2806 if flavor == "UNIQUE":
2807 unique = True
2808 elif flavor in ("FULLTEXT", "SPATIAL"):
2809 dialect_options["mysql_prefix"] = flavor
2810 elif flavor is None:
2811 pass
2812 else:
2813 self.logger.info(
2814 "Converting unknown KEY type %s to a plain KEY", flavor
2815 )
2816 pass
2818 if spec["parser"]:
2819 dialect_options["mysql_with_parser"] = spec["parser"]
2821 index_d = {}
2822 if dialect_options:
2823 index_d["dialect_options"] = dialect_options
2825 index_d["name"] = spec["name"]
2826 index_d["column_names"] = [s[0] for s in spec["columns"]]
2827 index_d["unique"] = unique
2828 if flavor:
2829 index_d["type"] = flavor
2830 indexes.append(index_d)
2831 return indexes
2833 @reflection.cache
2834 def get_unique_constraints(
2835 self, connection, table_name, schema=None, **kw
2836 ):
2837 parsed_state = self._parsed_state_or_create(
2838 connection, table_name, schema, **kw
2839 )
2841 return [
2842 {
2843 "name": key["name"],
2844 "column_names": [col[0] for col in key["columns"]],
2845 "duplicates_index": key["name"],
2846 }
2847 for key in parsed_state.keys
2848 if key["type"] == "UNIQUE"
2849 ]
2851 @reflection.cache
2852 def get_view_definition(self, connection, view_name, schema=None, **kw):
2854 charset = self._connection_charset
2855 full_name = ".".join(
2856 self.identifier_preparer._quote_free_identifiers(schema, view_name)
2857 )
2858 sql = self._show_create_table(
2859 connection, None, charset, full_name=full_name
2860 )
2861 return sql
2863 def _parsed_state_or_create(
2864 self, connection, table_name, schema=None, **kw
2865 ):
2866 return self._setup_parser(
2867 connection,
2868 table_name,
2869 schema,
2870 info_cache=kw.get("info_cache", None),
2871 )
2873 @util.memoized_property
2874 def _tabledef_parser(self):
2875 """return the MySQLTableDefinitionParser, generate if needed.
2877 The deferred creation ensures that the dialect has
2878 retrieved server version information first.
2880 """
2881 if self.server_version_info < (4, 1) and self._server_ansiquotes:
2882 # ANSI_QUOTES doesn't affect SHOW CREATE TABLE on < 4.1
2883 preparer = self.preparer(self, server_ansiquotes=False)
2884 else:
2885 preparer = self.identifier_preparer
2886 return _reflection.MySQLTableDefinitionParser(self, preparer)
2888 @reflection.cache
2889 def _setup_parser(self, connection, table_name, schema=None, **kw):
2890 charset = self._connection_charset
2891 parser = self._tabledef_parser
2892 full_name = ".".join(
2893 self.identifier_preparer._quote_free_identifiers(
2894 schema, table_name
2895 )
2896 )
2897 sql = self._show_create_table(
2898 connection, None, charset, full_name=full_name
2899 )
2900 if re.match(r"^CREATE (?:ALGORITHM)?.* VIEW", sql):
2901 # Adapt views to something table-like.
2902 columns = self._describe_table(
2903 connection, None, charset, full_name=full_name
2904 )
2905 sql = parser._describe_to_create(table_name, columns)
2906 return parser.parse(sql, charset)
2908 def _detect_charset(self, connection):
2909 raise NotImplementedError()
2911 def _detect_casing(self, connection):
2912 """Sniff out identifier case sensitivity.
2914 Cached per-connection. This value can not change without a server
2915 restart.
2917 """
2918 # http://dev.mysql.com/doc/refman/5.0/en/name-case-sensitivity.html
2920 charset = self._connection_charset
2921 row = self._compat_first(
2922 connection.execute("SHOW VARIABLES LIKE 'lower_case_table_names'"),
2923 charset=charset,
2924 )
2925 if not row:
2926 cs = 0
2927 else:
2928 # 4.0.15 returns OFF or ON according to [ticket:489]
2929 # 3.23 doesn't, 4.0.27 doesn't..
2930 if row[1] == "OFF":
2931 cs = 0
2932 elif row[1] == "ON":
2933 cs = 1
2934 else:
2935 cs = int(row[1])
2936 self._casing = cs
2937 return cs
2939 def _detect_collations(self, connection):
2940 """Pull the active COLLATIONS list from the server.
2942 Cached per-connection.
2943 """
2945 collations = {}
2946 if self.server_version_info < (4, 1, 0):
2947 pass
2948 else:
2949 charset = self._connection_charset
2950 rs = connection.execute("SHOW COLLATION")
2951 for row in self._compat_fetchall(rs, charset):
2952 collations[row[0]] = row[1]
2953 return collations
2955 def _detect_sql_mode(self, connection):
2956 row = self._compat_first(
2957 connection.execute("SHOW VARIABLES LIKE 'sql_mode'"),
2958 charset=self._connection_charset,
2959 )
2961 if not row:
2962 util.warn(
2963 "Could not retrieve SQL_MODE; please ensure the "
2964 "MySQL user has permissions to SHOW VARIABLES"
2965 )
2966 self._sql_mode = ""
2967 else:
2968 self._sql_mode = row[1] or ""
2970 def _detect_ansiquotes(self, connection):
2971 """Detect and adjust for the ANSI_QUOTES sql mode."""
2973 mode = self._sql_mode
2974 if not mode:
2975 mode = ""
2976 elif mode.isdigit():
2977 mode_no = int(mode)
2978 mode = (mode_no | 4 == mode_no) and "ANSI_QUOTES" or ""
2980 self._server_ansiquotes = "ANSI_QUOTES" in mode
2982 # as of MySQL 5.0.1
2983 self._backslash_escapes = "NO_BACKSLASH_ESCAPES" not in mode
2985 def _show_create_table(
2986 self, connection, table, charset=None, full_name=None
2987 ):
2988 """Run SHOW CREATE TABLE for a ``Table``."""
2990 if full_name is None:
2991 full_name = self.identifier_preparer.format_table(table)
2992 st = "SHOW CREATE TABLE %s" % full_name
2994 rp = None
2995 try:
2996 rp = connection.execution_options(
2997 skip_user_error_events=True
2998 ).execute(st)
2999 except exc.DBAPIError as e:
3000 if self._extract_error_code(e.orig) == 1146:
3001 util.raise_(exc.NoSuchTableError(full_name), replace_context=e)
3002 else:
3003 raise
3004 row = self._compat_first(rp, charset=charset)
3005 if not row:
3006 raise exc.NoSuchTableError(full_name)
3007 return row[1].strip()
3009 return sql
3011 def _describe_table(self, connection, table, charset=None, full_name=None):
3012 """Run DESCRIBE for a ``Table`` and return processed rows."""
3014 if full_name is None:
3015 full_name = self.identifier_preparer.format_table(table)
3016 st = "DESCRIBE %s" % full_name
3018 rp, rows = None, None
3019 try:
3020 try:
3021 rp = connection.execution_options(
3022 skip_user_error_events=True
3023 ).execute(st)
3024 except exc.DBAPIError as e:
3025 code = self._extract_error_code(e.orig)
3026 if code == 1146:
3027 util.raise_(
3028 exc.NoSuchTableError(full_name), replace_context=e
3029 )
3030 elif code == 1356:
3031 util.raise_(
3032 exc.UnreflectableTableError(
3033 "Table or view named %s could not be "
3034 "reflected: %s" % (full_name, e)
3035 ),
3036 replace_context=e,
3037 )
3038 else:
3039 raise
3040 rows = self._compat_fetchall(rp, charset=charset)
3041 finally:
3042 if rp:
3043 rp.close()
3044 return rows
3047class _DecodingRowProxy(object):
3048 """Return unicode-decoded values based on type inspection.
3050 Smooth over data type issues (esp. with alpha driver versions) and
3051 normalize strings as Unicode regardless of user-configured driver
3052 encoding settings.
3054 """
3056 # Some MySQL-python versions can return some columns as
3057 # sets.Set(['value']) (seriously) but thankfully that doesn't
3058 # seem to come up in DDL queries.
3060 _encoding_compat = {
3061 "koi8r": "koi8_r",
3062 "koi8u": "koi8_u",
3063 "utf16": "utf-16-be", # MySQL's uft16 is always bigendian
3064 "utf8mb4": "utf8", # real utf8
3065 "eucjpms": "ujis",
3066 }
3068 def __init__(self, rowproxy, charset):
3069 self.rowproxy = rowproxy
3070 self.charset = self._encoding_compat.get(charset, charset)
3072 def __getitem__(self, index):
3073 item = self.rowproxy[index]
3074 if isinstance(item, _array):
3075 item = item.tostring()
3077 if self.charset and isinstance(item, util.binary_type):
3078 return item.decode(self.charset)
3079 else:
3080 return item
3082 def __getattr__(self, attr):
3083 item = getattr(self.rowproxy, attr)
3084 if isinstance(item, _array):
3085 item = item.tostring()
3086 if self.charset and isinstance(item, util.binary_type):
3087 return item.decode(self.charset)
3088 else:
3089 return item