Happy Thanksgiving

November 24, 2010 at 08:34 PM | Code, SQLAlchemy

Motivations:

  1. Went through all that trouble to rebuild my blog, still nothing on my mind. Let's make something up !
  2. Holiday cheer ! Since I am nothing if not cheery.

The exercise below uses a database to make a turkey. It does so via an entirely convoluted and pointless series of steps, persisting and restoring a needlessly esoteric set of data with a SQLite database through a series of obnoxious arithmetical translations, however delicious the final answer may be.

It does, at least, illustrate some query/domain model techniques I've been using on the job as of late. The @classproperty constructs you see below are a slimmed-down version of SQLAlchemy's "hybrid attributes" example, attribute helpers that are going to play a much more prominent role in 0.7, where we begin to de-emphasize synonym and comparable_property in favor of the hybrid, a simpler and more versatile component.

In their full form, these components allow domain classes to define transformational expressions that work both as SQL expressions and as instance attributes. Below, a more succinct version of them is used to associate composed SQL elements with a domain model. Ramped up to scale in a real application, I'm able to generate large paginated displays, containing a wide range of values mathematically derived from others, all evaluated in SQL and pulled straight from a single Query object composed using this technique. In other contexts, when I load individual objects from the database, those same derivation methods are available to me at the instance level, pulling their components from the in-memory state of the object and interpreted by Python instead of by the relational engine.
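To give a rough idea of that full form, here's a minimal sketch of a hybrid-style descriptor (the names and the Interval class are purely illustrative; the real "hybrid attributes" example ships with the SQLAlchemy distribution): the same function produces a SQL expression when accessed on the class, and plain Python arithmetic when accessed on an instance.

from sqlalchemy import Column, Integer, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import Session

Base = declarative_base()

class hybridproperty(object):
    """Dispatch to the class for SQL expressions, to the instance for Python."""
    def __init__(self, func):
        self.func = func

    def __get__(self, instance, owner):
        # class access -> pass the class, producing a SQL expression;
        # instance access -> pass the instance, evaluating in plain Python
        return self.func(owner if instance is None else instance)

class Interval(Base):
    __tablename__ = 'interval'
    id = Column(Integer, primary_key=True)
    start = Column(Integer)
    end = Column(Integer)

    @hybridproperty
    def length(self):
        return self.end - self.start

e = create_engine('sqlite://')
Base.metadata.create_all(e)
session = Session(e)
session.add(Interval(start=2, end=10))
session.commit()

print Interval(start=1, end=4).length                              # plain Python: 3
print session.query(Interval).filter(Interval.length > 5).count()  # evaluated in SQL: 1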

Surmounting the barrier of absurdity inherent in this exercise did yield a small bounty - writing this example uncovered some surprisingly glaring expression language bugs I have to fix (see all those self_group() calls ? Those shouldn't have to be there...ouch).

Hope you enjoy it:

packed_turkey = \
    '789c2d954b121d350c45e7bd12129cc292fc4d522936c18c82193b604465ef9cd'\
    '3c9c0cf7eb66c5d5d5da97f8fbbeaf92f67f59655a3e5aef1fc737a3d1f224f0b'\
    'f6479b6d8cdb7271bc66bb31db1ad9ee1e8d65ac16bb05b6b7c560ea839dd89cc'\
    'c6ca3df36a2b77d3783d339a2ade3cbdcadd576659bc73792516d7b7b3b3676d9'\
    '3281509dc11ceb0eb0700284d5d66e031fc969f6c42200e573b719cb8976f8054'\
    '84436362fe72c165ef01c9793ee58c44e8c9c8d5b0c7c70561b87f88a7d5f7280'\
    '36f143e431011dc9d50444dde7978c013917ec153c9f9b832452e8e5a717ec754'\
    '19d36f56aa0ebc7e2e56a7b1707975b93abfbdee7d70941333ba82f335948ff6b'\
    'd192d06092d90103109887572476810d7e92c7a2f2c5407050c5336413a070b40'\
    'f4e00631c84961054c3b908b8782a481a6100182d7c3cd03bcd741a6719d33577'\
    '6c5ed78dc82fe9004b60c633cca70db01637a05a80984be59598b1fbf349ae025'\
    '905c90caeafebbd002ab16019a41f7750898858c5a807c38edbf75160ae838a38'\
    'e27692b0816986ec1073cce737f83c686020a6970a00aa40c2ac05ec3e4ca98c9'\
    '3ec7d84a2e3fe728161caf29ccf9f5b2590f64b8812cb400e5b46bbd7e18767e6'\
    '3095f82811e31333c20e44380f1eae95b25e9f2ad310b1bc3243e91c221be7150'\
    '37a57a66c6e4b0fe1a422721fc9148a295298aa7fd6f3174f3732a828246733b8'\
    'd94e333b4d3ec404e4a680140a4e9a9515adac1cd2d4ac01cbc18a0ca4d6a6b01'\
    'a4c923db89eaab781114dc323f88804cee7f14cf88d150bb95c78219bd4123a47'\
    '2b446ef977a084778544c2811b2009ff8322c0f053820c1b8ae897954282c3def'\
    '0d6cdee926bca1c64cd760321d42a04b367b28ff93004b5627fb20ab77965a85e'\
    'd672516a4c59db07f84f852f756a7ad42d712fa5b04c026adc47c05d0acd95b35'\
    'cc2a44da89728589081a23a2cc142c6de2ceb0197c7466198dd36658b287bd6c6'\
    '92da2ceaaa08aa08ea22ae0be86b8de1e2027ef07fe07e0274582deebdd5b6189'\
    'ca56bc99b324c479b0e8c2c0a22727fc0ceb8969a33ad60cce7df89b80748969a'\
    'b5d10de7f1fcc167219f2f748a7c3e23bafb7ca7cdcfe7eb78054a72415cf3f93'\
    'b793951482e05e44045047ebb45884cd2a08745e3b7810312316d406f43826493'\
    '07dcb4dd2f8585711ebf387e6ac20cd073cdc09b67aba028e2e1f48ab72014e4c'\
    '7ef001b9674d91615e5b505e078beada0f9541a2e0a29081b641ace6dc75da5d1'\
    '80f17171211f775a25169c7dc9a806e5908af25a667ce4545a7443b42ba95a375'\
    'e69aa63bbaa969ad9d88491e3f9365e7cf5fc0f2961791f'

from sqlalchemy import create_engine, Integer, Float, \
                        CHAR, Column, ForeignKey, \
                        func, cast, case, and_
from sqlalchemy.orm import Session
from sqlalchemy.ext.declarative import declarative_base
import math, binascii, zlib

# decode the turkey: each line of the decompressed payload is a single
# character immediately followed by the comma-separated spiral positions
# at which that character appears
img = dict(
    (line[0], [int(x) for x in line[1:].split(",")])
    for line in zlib.decompress(binascii.unhexlify(packed_turkey)).\
    split("\n") if line)

class classproperty(property):
    """Class level @property."""
    def __get__(desc, self, cls):
        # 'desc' is the descriptor itself; the instance argument is ignored,
        # so access always produces the class-level expression
        return desc.fget(cls)

Base = declarative_base()

class SpiralPoint(Base):
    """Store a character and its position along a spiral."""

    __tablename__ = 'spiral_point'

    def __init__(self, character, value):
        self.value = value
        self.character = character
        self.sqrt = math.sqrt(value)

    value = Column(Integer, primary_key=True)
    """The value."""

    character = Column(CHAR(1))
    """The character."""

    sqrt = Column(Float, nullable=False)
    """Store the square root of the value, SQLite
    doesn't have sqrt() built in.

    (custom SQLite functions are beyond the scope
    of this "exercise", as it were)
    """

    @classproperty
    def nearest_odd(cls):
        """Return the nearest odd number below this SpiralPoint's sqrt."""

        return (func.round(cls.sqrt / 2) * 2 - 1).\
                    self_group().label('nearest_odd')

    @classproperty
    def nearest_even(cls):
        """Return the nearest even number below this SpiralPoint's sqrt."""

        return (cls.nearest_odd + 1).\
                    self_group().label('nearest_even')

    @classproperty
    def center_distance(cls):
        """How far from the 'center' is this value ?"""

        return (cls.nearest_even / 2).label('center_distance')

    @classproperty
    def quadrant(cls):
        """Which side of the 'center' is this value part of ?"""

        return (cls.value - (
                cls.nearest_odd * cls.nearest_odd
            )).label('quadrant')

e = create_engine('sqlite://', echo=True)
Base.metadata.create_all(e)

session = Session(e)
session.add_all([SpiralPoint(char, val)
                    for char in img
                    for val in img[char]])
session.commit()

total_x = 60
total_y = 24

# load our turkey !
turkey = session.query(
    SpiralPoint.character,
    cast(case([
        (
            SpiralPoint.quadrant < SpiralPoint.nearest_even,
            (total_x / 2) + SpiralPoint.center_distance
        ),
        (
            and_(
                SpiralPoint.nearest_even <= SpiralPoint.quadrant,
                SpiralPoint.quadrant < SpiralPoint.nearest_even * 2
            ),
            ((total_x / 2) + SpiralPoint.center_distance - 1) -
                (SpiralPoint.quadrant - SpiralPoint.nearest_even).self_group()
        ),
        (
            and_(
                SpiralPoint.nearest_even * 2 <= SpiralPoint.quadrant,
                SpiralPoint.quadrant < SpiralPoint.nearest_even * 3
            ),
            (total_x / 2) - SpiralPoint.center_distance
        )
    ],
        else_ = ((total_x / 2) - SpiralPoint.center_distance + 1) +
                (SpiralPoint.quadrant - (SpiralPoint.nearest_even * 3))
    ), Integer).label('x'),

    cast(case([
        (
            SpiralPoint.quadrant < SpiralPoint.nearest_even,
            (total_y / 2) - SpiralPoint.center_distance + 1 + SpiralPoint.quadrant
        ),
        (
            and_(
                SpiralPoint.nearest_even <= SpiralPoint.quadrant,
                SpiralPoint.quadrant < SpiralPoint.nearest_even * 2
            ),
            (total_y / 2) + SpiralPoint.center_distance),
        (
            and_(
                SpiralPoint.nearest_even * 2 <= SpiralPoint.quadrant,
                SpiralPoint.quadrant < SpiralPoint.nearest_even * 3
            ),
            (total_y / 2) + SpiralPoint.center_distance - 1 -
            (SpiralPoint.quadrant - (SpiralPoint.nearest_even * 2)).self_group()
        )
    ],
        else_ = (total_y / 2) - SpiralPoint.center_distance
    ), Integer).label('y')
)

# serve our turkey !
grid = [
        [' ' for x in xrange(total_x)]
        for y in xrange(total_y)
]

for char, x, y in turkey:
    grid[y][x] = char

for g in grid:
    print "".join([c for c in g])

Download Source


SQLAlchemy 0.6 ....Getting Warmer

October 12, 2009 at 11:07 AM | Code, SQLAlchemy

It's super-hard to find good large blocks of time to work on SQLAlchemy at the moment...but I did manage to get most of the major "what's new?" bits up on the wiki. I'm really close to pulling the trigger on a beta release, and we already have people running trunk. There's just a certain presence of mind I like to have before releasing that hasn't been clicking on weekends.

Read about the big new items and what to expect when upgrading, at http://www.sqlalchemy.org/trac/wiki/06Migration.


SQLAlchemy 0.5.4p1 Recommended for All Ages

May 18, 2009 at 12:37 PM | Code, SQLAlchemy

I don't usually blog about releases, especially point releases, but this one is pretty significant in that we've repaired some very severe speed bumps that were impacting the flush() process. Anyone working with large numbers of objects who has observed the Session slowing down as it gets bigger should download this release, as that issue has been resolved. Other latencies within flush() have also been flattened, and a few spreadsheet jobs that I run here, which were taking 20-30 minutes, now complete in about five.

It's still not nearly as fast as running a single huge executemany() to insert lots of rows, but if your experience with this release is like mine, you should see much faster runs for large data update operations. As usual there are dozens of other fixes and enhancements too.

Work on the 0.6 series continues as well; that release is focused on expanding the world of compatibility for SQLAlchemy, including support for Python 3, Jython, and many more DBAPI implementations. It also refactors DDL generation to work within the same compiler framework as non-DDL expressions, so you can easily create and execute CreateTable kinds of objects. Database reflection has also been greatly enhanced with a new Inspector API.
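For a rough idea of what those last two items look like, here's a sketch against the 0.6 API (the table and in-memory engine are just for illustration):

from sqlalchemy import create_engine, MetaData, Table, Column, Integer, String
from sqlalchemy.schema import CreateTable
from sqlalchemy.engine import reflection

engine = create_engine('sqlite://')
metadata = MetaData()
accounts = Table('accounts', metadata,
    Column('id', Integer, primary_key=True),
    Column('name', String(50))
)

# CreateTable compiles and executes like any other expression construct
print CreateTable(accounts).compile(engine)

metadata.create_all(engine)

# the Inspector provides a consistent interface to reflected schema information
insp = reflection.Inspector.from_engine(engine)
print insp.get_table_names()
print insp.get_columns('accounts')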


SQLAlchemy - Breaking the 80% Barrier Since Day One

January 10, 2009 at 04:56 PM | Code, SQLAlchemy

As the 0.5 release of SQLAlchemy has now become available, here's a little retrospective on the "relational" nature of SQLAlchemy. SQLAlchemy's equal treatment of any relation, and its deep ability to transform such relations, is the core value that makes it unique within the database access/ORM field. It's the key to our 80%-busting power, which allows an application architected around SQLAlchemy to smoothly co-evolve with an ever more complex set of queries and schemas.

To illustrate this, we'll walk through an 0.5 feature that draws upon the three years of effort that's gone into this capability.

As is typical, we start in an entirely boring way:

from sqlalchemy import Column, Integer, Unicode, create_engine
from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Foo(Base):
    __tablename__ = 'foo'
    id = Column(Integer, primary_key=True)
    data = Column(Unicode)

    def __repr__(self):
        return "Foo(%r)" % self.data

engine = create_engine('sqlite:///:memory:', echo=True)
Base.metadata.create_all(engine)

session = sessionmaker(engine)()

To those unfamiliar with SQLA, the above example is specific to the ORM portion of SQLAlchemy (as opposed to the SQL expression language, an independent library upon which the ORM builds). It consists of the requisite imports, a declarative_base() class which offers us an easy platform with which to construct database-enabled classes, a Foo mapped class with some pretty generic columns, and the specification of a datasource, CREATE TABLE statements as needed, and an ORM session to pull it together. Everything above is detailed in the Object Relational Tutorial.

The data we'll start with is five objects with predictable data values:

session.add_all([
    Foo(data=u'f1'),
    Foo(data=u'f2'),
    Foo(data=u'f3'),
    Foo(data=u'f4'),
    Foo(data=u'f5'),
])

session.commit()

In 0.5, we can now query individual columns at the ORM level. So starting with a query like this:

query = session.query(Foo.id, Foo.data)

We can receive the results of this query using all() (we'll move to doctest format where the output can be viewed):

>>> print query.all()
SELECT foo.id AS foo_id, foo.data AS foo_data
FROM foo
[]
[(1, u'f1'), (2, u'f2'), (3, u'f3'), (4, u'f4'), (5, u'f5')]

The individual column query feature is nice - but still very boring ! It's nothing you can't do with any other tool, and SQLA actually lagged behind a bit in offering this capability in a straightforward way at the ORM level, which is partially because it was always possible with the SQL expression language part of SQLAlchemy, and partially because our Query has a broad usage contract that took a while to adapt to this model. Water under the bridge....

Things remain patently boring as we decide to limit the results to just the first three rows:

>>> print query.limit(3).all()
SELECT foo.id AS foo_id, foo.data AS foo_data
FROM foo
LIMIT 3 OFFSET 0
[]
[(1, u'f1'), (2, u'f2'), (3, u'f3')]

Did I see something flicker in the corner ? Not really, we're just adding a descending order by so that we get the last three rows instead. Yaawwwnnnnn:

>>> print query.order_by(Foo.data.desc()).limit(3).all()
SELECT foo.id AS foo_id, foo.data AS foo_data
FROM foo ORDER BY foo.data DESC
LIMIT 3 OFFSET 0
[]
[(5, u'f5'), (4, u'f4'), (3, u'f3')]

Barely staying awake, we'd like to sort the above rows in name ascending order so that we get ['f3', 'f4', 'f5'] instead. To do this, it's, errr, umm.......

Eighty Percent Time !!!

What just happened ? Proceeding through an entirely boring series of modifications to our query, we've suddenly hit something that is not entirely obvious. This is where any afternoon-coded SQL tool falls off, because limiting to the last three rows and then ordering the limited results in the opposite direction requires a subquery. Not just a scalar subquery like WHERE x=(SELECT y FROM table) either - a subquery that acts the same way as we've been treating our table - a "selectable" which delivers rows that correspond to our Foo class, to which we can then apply an ascending ORDER BY.

Let's be fair. You could get the results you want using a WHERE subquery, such as in conjunction with IN. "SELECT * FROM foo WHERE id IN (SELECT id FROM foo ORDER BY data DESC LIMIT 3) ORDER BY data" would do it. There are other ways too, like EXISTS, or maybe issuing a JOIN to the subquery (another tall order for some tools). However, let me respectfully say that this is lame. You're rearranging your SQL and potentially reducing query optimization because your tool won't let you do the most obvious thing (or more importantly, exactly what you want to do). This is a typical 80% boundary. You pull this one last Jenga stick out and the whole thing collapses, as your tool no longer supports the natural progression of expression construction that direct SQL offers you. You need to drop into raw SQL or you need to restructure your whole query to work around the tool's limitations.
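For comparison's sake, that IN workaround is itself easy enough to express with the Query (a sketch, reusing the Foo mapping and session from above); the more direct route is where the rest of this post is headed:

# restrict by IN against the row-limited subquery, then re-order ascending
last_three = session.query(Foo.id).order_by(Foo.data.desc()).limit(3)
print session.query(Foo.id, Foo.data).\
            filter(Foo.id.in_(last_three.statement)).\
            order_by(Foo.data).all()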

Before we continue, here's a pop quiz. What would be the expected behavior of the following:

query = session.query(Foo.id, Foo.data)
query = query.order_by(Foo.data.desc()).limit(3)
query = query.order_by(Foo.data)
print query.all()

Where above, we order by data descending, then LIMIT 3, then order by data ascending. Does it:

  1. Issue ORDER BY data DESC, data and then issue the LIMIT ?
  2. Issue ORDER BY data DESC LIMIT 3, and then ORDER BY on a subquery of the preceding statement ?

As it turns out, the answer to this question is subjective. Depending on the perspective one comes from, we've observed from talking to our community that some expect behavior "1" and some expect behavior "2". So in refusing to guess, here's what it does:

>>> query = session.query(Foo.id, Foo.data)
>>> query = query.order_by(Foo.data.desc()).limit(3)
>>> query = query.order_by(Foo.data)
Traceback (most recent call last):
    ...
sqlalchemy.exc.InvalidRequestError: Query.order_by() being
    called on a Query which already has LIMIT or OFFSET applied.
    To modify the row-limited results of a Query, call
    from_self() first. Otherwise, call order_by() before limit()
    or offset() are applied.

We went with "please tell us which answer you'd like". Specifying from_self() means "yes, we really want to wrap the whole thing in a subquery before continuing":

>>> query = query.from_self().order_by(Foo.data)
>>> print query.all()
SELECT anon_1.foo_id AS anon_1_foo_id, anon_1.foo_data AS anon_1_foo_data
FROM (SELECT foo.id AS foo_id, foo.data AS foo_data
FROM foo ORDER BY foo.data DESC
 LIMIT 3 OFFSET 0) AS anon_1 ORDER BY anon_1.foo_data
[]
[(3, u'f3'), (4, u'f4'), (5, u'f5')]

The mechanism SQLAlchemy uses to wrap tables in subqueries while consistently targeting the columns back to our mapped columns and entities is called Column Correspondence - this is something I described in detail in this blog post. I'm not familiar with any formalized system that describes column correspondence from a relational standpoint, but if someone out there is, I'd appreciate the education. I'm not at all a formalist and I've built this whole thing in my garage.
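The idea can be poked at directly at the expression level; here's a rough sketch using the foo table from the mapping above (the alias name is arbitrary):

# wrap foo in a row-limited subquery, then ask the subquery which of its
# columns corresponds to a column of the original table
foo_table = Foo.__table__
limited = foo_table.select().\
                order_by(foo_table.c.data.desc()).\
                limit(3).\
                alias('limited_foo')

print limited.corresponding_column(foo_table.c.data)   # limited_foo.data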

We still haven't hit the "retrospective" and/or "day one" part of the story yet. from_self() is just one of many ways to get at SQLAlchemy's "relational" guts, the big "under the hood" feature that's taken (and continues to take) a long time to get right. In the old days these guts were exposed in the ORM in extremely limited ways. As we've progressed, we're able to allow more generalized access to it, such as via from_self(). Let's illustrate another from_self() example that can link back to one of SQLA's original "80% busters".

Like all SQLAlchemy examples, we add a second class Bar, relate it to Foo, and add some more data:

from sqlalchemy import ForeignKey
from sqlalchemy.orm import relation, backref

class Bar(Base):
    __tablename__ = 'bar'
    id = Column(Integer, primary_key=True)
    data = Column(Unicode)
    foo_id = Column(Integer, ForeignKey('foo.id'))
    foo = relation(Foo, backref=backref('bars', collection_class=set))

    def __repr__(self):
        return "Bar(%r)" % self.data

Base.metadata.create_all(engine)

f3 = session.query(Foo).filter(Foo.data==u'f3').one()
f5 = session.query(Foo).filter(Foo.data==u'f5').one()

session.add_all([
    Bar(data=u'b1', foo=f3),
    Bar(data=u'b2', foo=f5),
    Bar(data=u'b3', foo=f5),
    Bar(data=u'b4', foo=f5),
    ])
session.commit()

We've added four Bar objects, each of which references a Foo object via many-to-one. The corresponding Foo object references each Bar via a one-to-many collection. For background, this is also ORM Tutorial stuff.

Let's now do the obvious thing of selecting all the data at once. Just for fun we'll do the query like this (yes, you can mix columns and full entities freely):

>>> query = session.query(Foo.id, Foo.data, Bar).outerjoin(Foo.bars)
>>> print query.all()
SELECT foo.id AS foo_id, foo.data AS foo_data, bar.id AS bar_id,
  bar.data AS bar_data, bar.foo_id AS bar_foo_id
FROM foo LEFT OUTER JOIN bar ON foo.id = bar.foo_id
[]
[(1, u'f1', None), (2, u'f2', None), (3, u'f3', Bar(u'b1')), (4, u'f4', None), (5, u'f5', Bar(u'b2')), (5, u'f5', Bar(u'b3')), (5, u'f5', Bar(u'b4'))]

Above we're using outerjoin(Foo.bars) to say "outer join from the foo table to the bar table". Foo.bars was configured via the backref for Bar.foo. We can see we get f5 back three times since three Bar rows match.

So what if we'd like to select all the Foo and Bar rows, but like before we want to get the last three Foos ? We can't do the same thing we did earlier - a straight LIMIT will be limited by the total number of rows, including the multiple f5 rows:

>>> print query.order_by(Foo.data.desc()).limit(3).all()
SELECT foo.id AS foo_id, foo.data AS foo_data, bar.id AS bar_id,
  bar.data AS bar_data, bar.foo_id AS bar_foo_id
FROM foo LEFT OUTER JOIN bar ON foo.id = bar.foo_id ORDER BY foo.data DESC
 LIMIT 3 OFFSET 0
[]
[(5, u'f5', Bar(u'b2')), (5, u'f5', Bar(u'b3')), (5, u'f5', Bar(u'b4'))]

Once again, to get the right data, we can use some less straightforward IN or EXISTS methodology, or we can do the most direct thing and join bar to the subquery of foo that we want. Query allows us to build up the statement exactly as we'd do it when thinking in SQL:

>>> query = session.query(Foo).order_by(Foo.data.desc()).limit(3)

from_self() will create the subquery for us, but we also want to change the columns we're selecting, since we'll be adding Bar via an outer join. For this purpose from_self() takes the same parameters as session.query():

>>> print query.from_self(Foo.id, Foo.data, Bar).outerjoin(Foo.bars).\
...     order_by(Foo.data).all()
SELECT anon_1.foo_id AS anon_1_foo_id, anon_1.foo_data AS anon_1_foo_data,
  bar.id AS bar_id, bar.data AS bar_data, bar.foo_id AS bar_foo_id
FROM (SELECT foo.id AS foo_id, foo.data AS foo_data
FROM foo ORDER BY foo.data DESC
LIMIT 3 OFFSET 0) AS anon_1
LEFT OUTER JOIN bar ON anon_1.foo_id = bar.foo_id
ORDER BY anon_1.foo_data
[]
[(3, u'f3', Bar(u'b1')), (4, u'f4', None), (5, u'f5', Bar(u'b2')), (5, u'f5', Bar(u'b3')), (5, u'f5', Bar(u'b4'))]

Above, we can see that not only does the subquery created by from_self() target the result columns to our Foo entity, it also adapts the join criterion so that everything just works.

Those who have worked with eager loading might recognize the above query - it is in fact the same kind of query that's been available since 0.1, that of "eager loading" a set of rows with a LEFT OUTER JOIN, but intelligently wrapping the primary query in a subquery so that LIMIT/OFFSET remain effective. The basic idea looks like this:

>>> from sqlalchemy.orm import eagerload
>>> for f in session.query(Foo).options(eagerload(Foo.bars)).\
...             order_by(Foo.data.desc()).limit(3):
...     print f, [b for b in f.bars]
...
SELECT anon_1.foo_id AS anon_1_foo_id, anon_1.foo_data AS anon_1_foo_data,
  bar_1.id AS bar_1_id, bar_1.data AS bar_1_data,
  bar_1.foo_id AS bar_1_foo_id
FROM (SELECT foo.id AS foo_id, foo.data AS foo_data
FROM foo ORDER BY foo.data DESC
LIMIT 3 OFFSET 0) AS anon_1
LEFT OUTER JOIN bar AS bar_1 ON anon_1.foo_id = bar_1.foo_id
 ORDER BY anon_1.foo_data DESC
[]
Foo(u'f5') [Bar(u'b2'), Bar(u'b3'), Bar(u'b4')]
Foo(u'f4') []
Foo(u'f3') [Bar(u'b1')]

Where above, we just select the Foo objects, and the Bar rows are delivered to collections attached to each Foo object. The fact that we've applied limit(3) indicates that the selection of Foo rows should take place within a subquery, to which the Bar rows are left outer joined. We didn't render the query quite as cleanly in 0.1 but by the 0.4 series we had gotten it to this point.

Finally, we can adapt our eager loaded query above to look just like the "last three rows of foo, outer joined to bar, ordered by 'data'" query by combining the eagerload() with from_self():

>>> for f in session.query(Foo).order_by(Foo.data.desc()).\
...             limit(3).from_self().options(eagerload(Foo.bars)).\
...             order_by(Foo.data):
...     print f, [b for b in f.bars]
...
SELECT anon_1.foo_id AS anon_1_foo_id, anon_1.foo_data AS anon_1_foo_data,
 bar_1.id AS bar_1_id, bar_1.data AS bar_1_data,
 bar_1.foo_id AS bar_1_foo_id
FROM (SELECT foo.id AS foo_id, foo.data AS foo_data
FROM foo ORDER BY foo.data DESC
LIMIT 3 OFFSET 0) AS anon_1
LEFT OUTER JOIN bar AS bar_1 ON anon_1.foo_id = bar_1.foo_id
ORDER BY anon_1.foo_data
[]
Foo(u'f3') [Bar(u'b1')]
Foo(u'f4') []
Foo(u'f5') [Bar(u'b2'), Bar(u'b3'), Bar(u'b4')]

I'm super excited about the 0.5 release (not to mention 0.6 for which we have a lot planned) since it represents the coming together of the original vision of offering the full relational model, years of feedback from real users with lots of production experience, and an ever more solid maturity to the internals which at this point have probably had about three full turnovers in construction.


Tags with SQLAlchemy

October 10, 2008 at 10:46 AM | Code, SQLAlchemy

Wayne Witzel gives us a very nice tutorial on how to implement simple tagging with SQLAlchemy. It's a totally straightforward example with nice usage of 0.5 Query paradigms as well as some SQL expression language integration (which looks familiar from the ML the other day...).
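For anyone who hasn't seen the pattern, the basic shape is a many-to-many association table between items and tags; here's a minimal sketch (the Item/Tag names are hypothetical and not necessarily those used in the tutorial):

from sqlalchemy import Table, Column, Integer, Unicode, ForeignKey, create_engine
from sqlalchemy.orm import relation, sessionmaker
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

# association table linking items to tags
item_tags = Table('item_tags', Base.metadata,
    Column('item_id', Integer, ForeignKey('items.id')),
    Column('tag_id', Integer, ForeignKey('tags.id'))
)

class Tag(Base):
    __tablename__ = 'tags'
    id = Column(Integer, primary_key=True)
    name = Column(Unicode, unique=True)

class Item(Base):
    __tablename__ = 'items'
    id = Column(Integer, primary_key=True)
    title = Column(Unicode)
    tags = relation(Tag, secondary=item_tags, backref='items')

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(engine)()

session.add(Item(title=u'a post about turkeys',
                 tags=[Tag(name=u'sqlalchemy'), Tag(name=u'python')]))
session.commit()

# all items carrying a given tag
print [item.title for item in
        session.query(Item).filter(Item.tags.any(Tag.name == u'python'))]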