☂️ v1.0 #1
mongodb
@heithemmoumni I am not certain that this is necessary since MongoDB does not use schemas the same way as RDBMS. However, from what I can see, schema validation was introduced in v3.6 so it might be worth looking into. Although I will personally not be looking into MongoDB until v1.0 has landed, I am always open for contributions. So if you have an implementation in mind, feel free to make a PR 👍
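For context, a minimal sketch of the schema validation mentioned above: MongoDB 3.6+ exposes it as a `$jsonSchema` validator passed to `createCollection`. The snippet uses the Node `mongodb` driver purely for illustration; a Deno driver would need an equivalent call.

```ts
// Sketch: MongoDB 3.6+ schema validation via a $jsonSchema validator.
// Shown with the Node "mongodb" driver for illustration only.
import { MongoClient } from "mongodb";

const client = await MongoClient.connect("mongodb://localhost:27017");
const db = client.db("app");

// The collection rejects inserts/updates that do not match the validator.
await db.createCollection("users", {
  validator: {
    $jsonSchema: {
      bsonType: "object",
      required: ["email"],
      properties: {
        email: { bsonType: "string", description: "email must be a string" },
      },
    },
  },
});

await client.close();
```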
What about seed data? For example in a test environment I may need to seed a DB to run integration tests, or I may need to have certain seed data on initial setup. Nvm, I see it on the roadmap (I guess I need to read more completely), sorry.
@awildeep No worries. I am still thinking about how to implement it so if you have any suggestions, I am open for feedback! One option would be to have it in a separate folder (similar to how Knex and Laravel are doing it) and have a command for it. Another option would be to have a method in the migration script called `seed` (although `custom` already exists) which runs the population of the table on migration. The third option would be to have both. Do you have any input regarding this or any other ideas?
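To make the second option concrete, a rough, purely hypothetical sketch of a migration exposing a `seed` hook that runs after `up`; the class shape and the client's `query` method are assumptions for illustration, not Nessie's actual API.

```ts
// Hypothetical sketch of option two: a seed() hook on the migration itself.
// The class shape and the client's query() method are assumed, not Nessie's API.
type Client = { query: (sql: string) => Promise<unknown> };

export default class CreateUsers {
  async up(client: Client): Promise<void> {
    await client.query(
      `CREATE TABLE users (id serial PRIMARY KEY, email text NOT NULL)`,
    );
  }

  // Would run right after up() so the table is populated on migration.
  async seed(client: Client): Promise<void> {
    await client.query(`INSERT INTO users (email) VALUES ('dev@example.com')`);
  }

  async down(client: Client): Promise<void> {
    await client.query(`DROP TABLE users`);
  }
}
```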
@halvardssm my ideal solution could be just folder-based segregation. You could even keep calling them migrations but just provide a run argument that executes a specified path to allow it to execute properly. This just leaves you with one command to maintain. There have been many occasions where I would love to have the flexibility of multiple seed paths (Test seeds [with accompanying reset logic to a known good state], vs development seeds [fake data generation], vs demo seeds, vs production initialization seeds, etc.). The only issue I see with just allowing for multiple migration folders is in the "best practice" of not combining seed data and migration data in the same file. It is hard to enforce unless you restrict the logic. But if you provide good usage examples and warn users against bad patterns, I think the value gained in not having to maintain multiple pipelines (seed only vs migration only) is worth it. If a user uses seeds in migrations while being warned that it is not in their best interest in the long term, I don't really see a need to enforce it given the complications it requires.
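As an illustration of the folder-based segregation idea, a config sketch along these lines could point a single run command at different seed paths per environment. None of these option names exist in Nessie; they are invented purely to show the shape of the idea.

```ts
// Hypothetical config sketch for folder-based seed segregation.
// These option names do not exist in Nessie; they only illustrate pointing a
// single "run" command at different seed paths per environment.
export default {
  migrationFolder: "./db/migrations",
  seedFolders: {
    test: "./db/seeds/test", // known-good state for integration tests
    development: "./db/seeds/dev", // fake data generation
    demo: "./db/seeds/demo",
    production: "./db/seeds/init", // initial production setup data
  },
};
```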
Any plans to support schemas? (in this sense I am referring to the DB specific schema, like PostgreSQL https://www.postgresql.org/docs/12/ddl-schemas.html) |
@awildeep That's a good idea! I will put it on the roadmap 👍 In the meantime, you can use […]. Could you also explain your use-case so that I get a better idea of how to implement it, and what to support for v1.0?
@halvardssm Schemas in PostgreSQL represent containers in a DB. They are accessed via dot notation (similar to MySQL targeting across DBs). Per the docs (https://www.postgresql.org/docs/12/ddl-schemas.html):
My current use-cases include:
While I can create a schema just fine, with

```sql
CREATE TABLE my_schema.my_table (...)
```

doing this presently results in:

```
PostgresError: cross-database references are not implemented: "public.my_schema.my_table"
```

EDIT: this may be an issue in your code. Looking at my error, it seems that my migration is targeting the `public` schema (this is the default schema in any Postgres DB), so it is effectively not letting me target a schema using dot notation. I think this means I essentially have to execute SQL directly in order to use schemas right now.
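For reference, a sketch of the raw-SQL workaround described above, creating the schema and a schema-qualified table directly. The Postgres statements are standard; the migration wrapper and its `query` client method are assumed shapes, not Nessie's actual API.

```ts
// Raw-SQL workaround sketch for schema-qualified tables.
// The SQL is standard Postgres; the migration shape and query() are assumed.
type Client = { query: (sql: string) => Promise<unknown> };

export default class CreateMySchemaTable {
  async up(client: Client): Promise<void> {
    await client.query(`CREATE SCHEMA IF NOT EXISTS my_schema`);
    await client.query(`
      CREATE TABLE my_schema.my_table (
        id serial PRIMARY KEY,
        name text NOT NULL
      )
    `);
  }

  async down(client: Client): Promise<void> {
    await client.query(`DROP TABLE my_schema.my_table`);
    await client.query(`DROP SCHEMA my_schema`);
  }
}
```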
@awildeep The only time `public` is enforced is on the `updatedAt` trigger, so that is most likely the issue. I was thinking more about migrations: do you use multiple schemas in one migration? I think a simple solution is to add a `schema` property to the `Table` class and use it with the table name by default. I will be a bit busy until Friday, but hopefully, I can do some work on this over the weekend 💪
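A rough sketch of what such a `schema` property could look like on a table builder; the `Table` class below is a stand-in for illustration, not Nessie's real implementation.

```ts
// Stand-in sketch of a schema property on a table builder (not Nessie's code).
class Table {
  // Defaults to "public" so existing migrations keep working unchanged.
  constructor(private name: string, private schema: string = "public") {}

  private qualifiedName(): string {
    return `${this.schema}.${this.name}`;
  }

  toSql(columns: string): string {
    return `CREATE TABLE ${this.qualifiedName()} (${columns});`;
  }
}

console.log(new Table("my_table", "my_schema").toSql("id serial PRIMARY KEY"));
// => CREATE TABLE my_schema.my_table (id serial PRIMARY KEY);
```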
@halvardssm I personally try not to ever do cross-schema migrations. Of course there are sometimes exceptions (e.g. a DB modification that has cross-schema implications) but I think in most cases this can be worked around.
@halvardssm confirmed, I removed my `updatedAt` and it migrated properly, and my rollbacks work as expected. I'll play around a bit more this week and see if I run into other issues. Want me to open an issue on the `updatedAt()` trigger?
@awildeep I see, then it should be a relatively easy fix. If you could open an issue, we can continue with a PR there 👍 Thanks! |
@awildeep I think a good next step to minimize bugs would be to decide whether this module should have its own query builder, or if we should integrate other query builders. Both have their pros and cons, and since you have been the most active here lately, I would like to ask you about your thoughts regarding this (everyone else who reads this comment is, of course, welcome to join in on the discussion). By using our own query builder, we minimize the number of external dependencies we have to consider. However, by maintaining our own query builder, a fair amount of work will have to be put in, and it would eventually become its own project. By using an external tool (currently looking at Dex, which is a port of Knex), we don't have to reinvent the wheel and can fully focus on maintaining a migration tool. However, with this, we are dependent on a third-party tool which I currently have no control over. As a user, what would you prefer? To use something familiar like Dex/Knex, or for Nessie to establish its own query builder?
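To make the trade-off concrete, below is a minimal, self-contained sketch of what the core of a home-grown table builder would involve; a Dex/Knex-style builder would produce an equivalent SQL string from a similar fluent API, which the migration runner would then execute. This is purely illustrative and not Nessie's or Dex's actual design.

```ts
// Minimal sketch of a home-grown query builder core (illustrative only).
type Column = { name: string; type: string; constraints: string[] };

class TableBuilder {
  private columns: Column[] = [];
  constructor(private tableName: string) {}

  column(name: string, type: string, ...constraints: string[]): this {
    this.columns.push({ name, type, constraints });
    return this;
  }

  // Maintaining our own builder means owning SQL generation like this, per
  // dialect; delegating to Dex/Knex would hand this part off.
  toSql(): string {
    const cols = this.columns
      .map((c) => [c.name, c.type, ...c.constraints].join(" "))
      .join(", ");
    return `CREATE TABLE ${this.tableName} (${cols});`;
  }
}

console.log(
  new TableBuilder("users")
    .column("id", "serial", "PRIMARY KEY")
    .column("email", "text", "NOT NULL")
    .toSql(),
);
// => CREATE TABLE users (id serial PRIMARY KEY, email text NOT NULL);
```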
@halvardssm Perhaps it would make sense to reach out to the Dex team as well? As for establishing a new query builder, I am not a huge fan, especially if a dev wants to use their own home-brew query builder; being agnostic has some big advantages to me. Also, what if someone wants to use an ORM that is not Dex compatible? Ultimately I think nessie is likely 99% of the way to what I need. I honestly only really need a systematic way to run SQL, and while query builders are nice, they tend to always have large deficiencies, especially when you factor in multiple DB support and the fact that many features are not available or even synonymous across platforms.
@awildeep I reached out to the team, so I am just waiting for a response regarding this. I have an idea which will make this migration module completely modular with regard to query builders and databases. I will make a PR in the upcoming days which should hopefully cover this transition, and also fix #33. Thanks for your input, I'll tag you when I have the PR ready 👍
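One way to picture that kind of modularity is a small client interface that any database driver or query-builder wrapper could satisfy. The names below are invented for illustration and are not what the upcoming PR actually implements.

```ts
// Invented sketch of a pluggable client interface (not the actual PR).
interface MigrationClient {
  query(sql: string): Promise<unknown>;
  // The runner only needs these hooks to track which migrations have run.
  createMigrationTable(): Promise<void>;
  appliedMigrations(): Promise<string[]>;
}

// Any driver (Postgres, MySQL, SQLite) or query-builder wrapper can be plugged
// in, as long as it satisfies the interface above.
async function migrate(
  client: MigrationClient,
  files: Record<string, string>, // migration name -> SQL
): Promise<void> {
  await client.createMigrationTable();
  const done = new Set(await client.appliedMigrations());
  for (const [name, sql] of Object.entries(files)) {
    if (!done.has(name)) await client.query(sql);
  }
}
```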
Hi. I'm not sure what the roadmap is for […].
Hi @iwatakeshi ! Nessie is mainly a database migration module, and from what I understood, Ecto is more like a query execution tool? There are already a lot of alternatives in this regard which are fully compatible with Nessie (https://deno.land/x/ -> search for […]). Although I am open to this transition (a complete DB module), I think adding a complete query builder is not feasible at the moment without more contributors to the project. So if anyone reads this and thinks they can help, don't hesitate to reach out!
@halvardssm Yeah, Ecto has a nice way of separating concerns and I highly recommend anyone to try it out in their free time. :) I would like to contribute, but I recently started to learn the backend stuff so if you don't mind me learning and contributing at the same time, then count me in. |
Sounds good! The TS logic is mostly the same for both FE and BE, so it should be a good learning experience 💪 It's as simple as opening an issue (choose the idea template), describing what you want to add, getting some feedback, and then making a PR 😄
Version 1.0 is scheduled for release on Wednesday the 10th of June! 🎉 After this, the API will be declared stable and follow semantic versioning as much as possible.
The following features are needed for v1.0:
Deno version:
Databases:
Migration CLI:
Structural:
Query Builder:
- Add `alter` method for schema
- Add support for foreign key constraints
- Support soft deletes