mysql support #685
Conversation
wow!
"fixed" codacy by excluding mysql_tests/**
The warning seems related to pylint-dev/pylint#2315
wow, I just came here with questions about plans to support different databases and found this :) very cool
Cool!
Synced with the latest master and added bakery. Note: aiomysql doesn't support PREPARE at the moment.
Good job! I think we could merge this to master first, release 1.1b2 and address the remaining issues in separate PRs.
# noinspection PyArgumentList
query = query.returning(*cols)

async def _execute_and_fetch(conn, query):
Oh wow, MySQL is really tricky. The only concern for this method is that we're issuing two queries but leaving the transaction handling to the user. I'm not 100% sure, but we may need to either wrap this in a (sub-)transaction or use CTEs - I think this is also tied to the "auto-commit" issue, please see above/below.
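For illustration, here's a minimal sketch of the (sub-)transaction option being discussed, assuming `conn` is an aiomysql connection; `insert_sql`, `insert_params` and `select_sql` are hypothetical placeholders for the compiled INSERT and the follow-up SELECT, not the PR's actual variables:

```
# Hedged sketch only: keep both statements inside one explicit transaction
# so the caller doesn't have to manage it.
async def _execute_and_fetch(conn, insert_sql, insert_params, select_sql):
    await conn.begin()
    try:
        cur = await conn.cursor()
        await cur.execute(insert_sql, insert_params)
        # Re-select the row to fetch server-generated values (RETURNING emulation).
        await cur.execute(select_sql, (cur.lastrowid,))
        row = await cur.fetchone()
        await cur.close()
    except Exception:
        await conn.rollback()
        raise
    await conn.commit()
    return row
```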
I thought about it, and I can't find a strong reason to use a sub-transaction.
BTW, the way SQLAlchemy handles this is to mark the object as dirty and run a SELECT the next time it's accessed. That's not what I would like to do.
# noinspection PyUnreachableCode
async def test_acquire_timeout():
    e = await create_engine(MYSQL_URL, minsize=1, maxsize=1)
Shall we unify the parameter naming to `min_size`/`max_size` by adding an adapter in the dialect/pool? In GINO2 I think we'll give up using the native asyncpg/aiomysql pools and switch to the native SQLAlchemy pool, which has only `pool_size` and `max_overflow`. For transition, I think we could support all 4 parameters in GINO 1.4 (version number to match SQLAlchemy 1.4/2.0, as well as leaving some space for GINO 1.2/1.3 to add features on top of SQLAlchemy 1.3 in the future, if any), and it might be easier if `min_size`/`max_size` is consistent between pg and mysql.
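As a rough illustration of that adapter idea (the function name, mapping, and defaults are illustrative assumptions, not GINO's actual API; `minsize=1`/`maxsize=10` mirror aiomysql's pool defaults):

```
# Hedged sketch: accept asyncpg-style min_size/max_size, the existing
# minsize/maxsize, and SQLAlchemy-style pool_size/max_overflow, then
# normalize to what the underlying aiomysql pool expects.
def normalize_pool_kwargs(**kwargs):
    pool_size = kwargs.pop("pool_size", None)
    max_overflow = kwargs.pop("max_overflow", None)
    min_size = kwargs.pop("min_size", kwargs.pop("minsize", None))
    max_size = kwargs.pop("max_size", kwargs.pop("maxsize", None))
    if pool_size is not None or max_overflow is not None:
        # SQLAlchemy semantics: keep pool_size connections around and allow
        # up to max_overflow extra ones temporarily.
        min_size = pool_size or 0
        max_size = (pool_size or 0) + (max_overflow or 0)
    kwargs["minsize"] = 1 if min_size is None else min_size
    kwargs["maxsize"] = 10 if max_size is None else max_size
    return kwargs
```

Usage would then look like `create_engine(MYSQL_URL, **normalize_pool_kwargs(min_size=1, max_size=1))`.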
The decision to make here is how to manage users' expectations. The parameters will be passed to the underlying engine. Shall we pass them through as-is, or do we want to customize some common ones?
I don't have a preferred answer right now.
mysql_tests/test_crud.py (outdated)
import gino

db1 = gino.Gino()
await db1.set_bind(MYSQL_URL, autocommit=True)
For `autocommit`: looks like this is a difference between asyncpg and DB-API. asyncpg has no concept of autocommit - any query is executed in its own transaction and committed if there is no explicit transaction like a leading `BEGIN;` (similar to `autocommit=True`, but SQLAlchemy is removing autocommit in 2.0). This is actually the nature of the PostgreSQL binary protocol. As GINO < 1.1 is very asyncpg-specific, all APIs assume "auto-commit" - statements are simply "committed" when executed, unless they are wrapped in an explicit transaction.
I'm wondering whether the MySQL protocol has similar behavior - even though both psycopg2 and pymysql provide the (simulated) "non-auto-commit" feature. Either way, in the current GINO2 WIP, I'm using the "driver-level auto-commit" to keep the GINO API consistent with "auto-commit", so maybe we should do the same for MySQL in GINO 1.1, so that users don't have to specify `autocommit=True` for GINO to behave consistently.
I haven't taken a closer look, but per the MySQL docs, it's enabled by default. I'm not sure why that's not the case here; it may be caused by aiomysql.
I set it here just for easier testing.
It's set to False by aiomysql. I think we can set it to None, which means using the server default.
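As a quick sketch of that idea (assuming aiomysql and a local MySQL instance; the connection parameters are made up for the example): `autocommit=None` leaves the server's own setting untouched, which in MySQL defaults to autocommit ON, while aiomysql's own default of `autocommit=False` turns it off for the session.

```
import asyncio
import aiomysql

async def main():
    # autocommit=None: don't override the server-side setting at all.
    conn = await aiomysql.connect(
        host="127.0.0.1", user="root", password="", db="gino",
        autocommit=None,
    )
    cur = await conn.cursor()
    await cur.execute("SELECT @@autocommit")
    print(await cur.fetchone())  # (1,) when the server default is in effect
    await cur.close()
    conn.close()

asyncio.run(main())
```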
Thanks for the fix! I'm drafting a doc about autocommit and SQLAlchemy 2.0, hopefully that could provide more ideas.
Published here: https://python-gino.org/docs/en/master/explanation/sa20.html
Error again after adding two fields.
Just convert datetime.now into a string.
Thanks for the review. I plan to do cleanup, check
@oleeks Thanks for the report. It's fixed now.
Passing None to aiomysql means it'll respect the server setting for autocommit.
It's me again... When updating the same data concurrently, "NoSuchRowError" appears. Is there any better suggestion?
@oleeks is it related to concurrent updates? Could you share the model and code you're using?
Okay, I am operating on business code, initiating two operations at the same time; logically speaking, there should be no problem.
Are transactions enabled by default, or do they need to be added explicitly? If they need to be added, how should that be done?
In the MySQL result, affected rows means the number of rows that actually got updated, not the number of matched rows. So it can't be used as an indicator of whether the rows actually exist.
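To make that concrete, a minimal sketch (assuming aiomysql and a local MySQL instance; the table and column names are made up): for an UPDATE, `cursor.rowcount` reports the rows that were actually changed, not the rows that matched the WHERE clause.

```
import asyncio
import aiomysql

async def main():
    conn = await aiomysql.connect(
        host="127.0.0.1", user="root", password="", db="gino", autocommit=True
    )
    cur = await conn.cursor()
    await cur.execute("UPDATE accounts SET balance = 100 WHERE id = 1")
    print(cur.rowcount)  # 1 if the value actually changed
    await cur.execute("UPDATE accounts SET balance = 100 WHERE id = 1")
    print(cur.rowcount)  # 0: the row matched, but nothing changed
    await cur.close()
    conn.close()

asyncio.run(main())
```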
It's fixed. The reason is I misunderstood the meaning of `affected_rows` in the MySQL result, which means how many rows actually got updated. So in your second request, the value is the same, causing `affected_rows` to be 0.
For the transaction question: in MySQL, autocommit is set to true by default. If you need transactions, you'll need to explicitly wrap the block in a transaction.
@oleeks Thanks for your experiments with this PR and the bug reports! Ideally I should apply it to my own project, but sadly I don't have one now, so I miss a lot of cases. Really appreciate your help!
I'm not very familiar with the use of transactions. Is it used like this?
```
@router.post('/create-order')
async def create_order():
    async with db.transaction():
        account = await Account.query.where(Account.uid == uid).gino.first()
        new_balance = (account.balance - Decimal('0.01')).quantize(Decimal('0.00'))
        await account.update(balance=new_balance).apply()
```
At the same time, due to concurrency, the value of `account.balance` from each asynchronous query is the same, so the balance used for new_balance is the same every time, which causes an error in the result.
I tried using a lock, but the impact is too big. Should I use synchronization in this scenario? If so, can you provide some sample code for reference? Or do you have better suggestions for this scenario?
```
lock = asyncio.Lock()

@router.post('/create-order')
async def create_order():
    async with lock:
        # async with db.acquire():
        account = await Account.query.where(Account.uid == uid).gino.first()
        new_balance = (account.balance - total_cost).quantize(Decimal('0.00'))
        await account.update(balance=new_balance).apply()
```
@oleeks right, a lock here is probably not a good idea.
I don't see any relevant examples in the GINO documentation. Is this how to use it? It doesn't feel right.
Setting the isolation level is easier, but it needs to be set for each session if it's not applicable to other transactions. For optimistic locking, I'd suggest using a version column instead of the balance. To use it, we'd need some improvements to expose affected rows.

class Account:
    version = Column(Integer, nullable=False)

account = await Account.query.where(Account.uid == uid).gino.first()
await Account.update.values(
    balance=new_balance,
    version=account.version + 1,
).where(
    and_(Account.id == account.id, Account.version == account.version)
).gino.status()

The second concurrent update can't match the old version any more, so it won't update anything.
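For what it's worth, a hedged sketch of how that versioned update could be retried; `rows_affected()` is a hypothetical helper, since (as noted above) exposing the affected-row count would still need improvements on the GINO side:

```
from sqlalchemy import and_

async def change_balance(uid, delta, retries=3):
    for _ in range(retries):
        account = await Account.query.where(Account.uid == uid).gino.first()
        status = await Account.update.values(
            balance=account.balance + delta,
            version=account.version + 1,
        ).where(
            and_(Account.id == account.id, Account.version == account.version)
        ).gino.status()
        if rows_affected(status):  # hypothetical: did the UPDATE change a row?
            return
    raise RuntimeError("balance update kept conflicting with concurrent writers")
```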
@wwwjfy thanks
I'll merge this first. Feel free to raise issues 🙏
Does MySQL work OK with GINO? Or is there any reason why info about it is not in the repo's README.md and not in the docs? :)
A workable version based on master from before the baked queries.
It's been a "painful" journey patching the existing system.
The main pain point: the code is structured to cater for PostgreSQL. It'll be easier once we have a generic implementation in 2.0 that integrates with other databases like SQLite.
Refs #381