Proposal to simplify the local development flow: conan install + use your build system #19
Conversation
We work with several internal teams, each of whom chooses to use Conan in different ways. I'm not sure about removing `conan build`; I feel losing it would be a great loss.
design/019-local_development_flow.md
## Migration notes

These commands are not frequently used in automated or CI flows; they are mostly developer-side commands. No migration issues are expected.
Every CI workflow at our establishment is using `conan build`. Not every CI flow in a Conan application produces a package; remember there are projects that are pure consumers.
Good point, `conan build` might be maintained if there is enough support, not a problem. It is just that it will operate slightly differently regarding the saved state and the arguments. I would like to understand: if the `generate()` abstraction is good enough, wouldn't it be interesting to have a "native build" calling CMake or Visual Studio or whatever directly, instead of `conan build`?
> Just that it will operate slightly differently regarding the saved state and the arguments.

Could I still give it an install folder to get the state from?

> wouldn't it be interesting to have a "native build" calling CMake or Visual or whatever directly instead of conan build?

In some cases, but then it changes from "invoke `build`" to:

- If you're on this project, use `scons` (ETA: we've got a proposal to get that project out of SCons and into CMake; it's just a matter of when).
- And if you're on this other project, you can use `cmake --build`.
- But if you want to be more direct, then you need to figure out whether the underlying CMake generator was `Unix Makefiles` or `Xcode` or `Visual Studio`.
- ...

I mean, we have some people who'll reach for an IDE and know they can find the project in the build folder, and some people who just hit the text files with an editor and then know they can invoke `build`.
We are also using `conan build` in our CI when testing that packages can be built correctly on PRs, without polluting the CI server's cache. I find it much more elegant than using native tools, as it makes the CI script much simpler and hides the complexity of using a special build tool for each specific platform. We abstract away the builds for iOS/Android/Windows/... in a Conan package that all our packages `python_requires`. This also gives us the ability to version the build procedure (e.g. old versions supported only iOS device builds; new ones also support Mac Catalyst builds). By having that in our CI script instead, we would be at risk of using an incompatible build procedure (e.g. a new Mac Catalyst build) on an older version of the package, yielding an incomprehensible error.
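The versioned-build-procedure argument can be illustrated with a small, purely hypothetical sketch (plain Python, no Conan imports; the function name and the version table are invented for illustration, not taken from the commenter's actual `python_requires` package):

```python
# Hypothetical sketch: a build procedure shared via python_requires, where
# each procedure version supports a known set of platform flavours.
SUPPORTED_FLAVOURS = {
    1: {"ios-device"},                   # old procedure: device builds only
    2: {"ios-device", "mac-catalyst"},   # newer procedure adds Catalyst
}

def select_build(platform, procedure_version):
    """Fail fast with a clear message, instead of an incomprehensible
    build error, when a flavour/procedure combination is invalid."""
    flavours = SUPPORTED_FLAVOURS.get(procedure_version)
    if flavours is None:
        raise ValueError(f"unknown build procedure version {procedure_version}")
    if platform not in flavours:
        raise ValueError(
            f"platform {platform!r} is not supported by build procedure "
            f"v{procedure_version}; supported: {sorted(flavours)}"
        )
    return f"build {platform} with procedure v{procedure_version}"
```

Pinning the procedure version next to the recipe is what prevents, say, a Catalyst build from being attempted with the v1 procedure.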
Good, I see the case, and how it can be convenient.

> Could I still give it an install folder to get the state from?

This is exactly what we are trying to get rid of. That state is really problematic, always buggy and incomplete; it is really a nightmare. The proposal is to pass `conan build` the arguments that you would use to produce that state, like you do with `conan install`, mostly the `--profile`. Just use the same arguments as `conan install`, or skip `conan install` altogether, as it can already be called by `conan build`, which will expand the graph anyway to recover the state.

The performance penalty should be very small: our benchmark for large graphs with hundreds of deps (when already installed in the cache) is a few seconds, which is typically much lower than the build itself, and negligible in CI time.
I like the idea of skipping `conan install` and just using `conan build` 👍, as long as `conan build` is kept.
Please @michaelmaguire @datalogics-kam read the proposal carefully; it is not saying that `conan build` will necessarily be removed.
The proposal originally said that the `conan build` command might disappear, which I downvoted. I would be happy to change to an upvote if the proposal is changed to state that `conan build` will be kept.
@memsharded I probably need more time to think about it. Since this is a proposal, I don't really have a prototype to try out, which might alleviate my concerns. If we can guarantee that `conan build` stays, that would help.
When I started with Conan I also used build/install/package, because it mapped to what I knew from RPM and other package-format creation. Meanwhile I have no use for them anymore, so I am fine with it.
Although I would be OK with the changed behaviour of `conan build`, the only problem with the current implementation is the cache pollution. Currently, our developers manually clean the local cache after developing and testing the packaging procedure, and our CI server has a very ugly hack to ensure that temporary packages exported while building pull requests are not retained in the cache for other builds, so that the current build does not affect other builds being performed in parallel on the same server. Here are the excerpts from our Jenkins pipeline scripts. This part tests that the package can be built without polluting the Conan cache at all:

```groovy
script.dir( 'build-conan-package' ) {
    script.withEnv(environment) {
        script.common.executeCmd( "${scriptPrefix} conan install ../${repository}/${packageSubfolder}/${conanfileName} -s build_type=${buildTypes[buildTypeIndex]} ${runtime} ${profileCmd} ${packageConfiguration} ${buildMissing}" )
        // upload any packages that were built by-dependency during `conan install` invocation
        script.commonConanBuild.uploadAllBuiltPackages()
        // clean any build directories
        script.commonConanBuild.cleanTemporaries()
        if ( requiresConanSourceInvocation ) {
            script.dir( "../${repository}/${packageSubfolder}" ) {
                script.common.executeCmd( "conan source ./${conanfileName}" )
            }
        }
        script.common.executeCmd( "${scriptPrefix} conan build ../${repository}/${packageSubfolder}/${conanfileName} --source-folder=../${repository}/${packageSubfolder} --build-folder=. --package-folder=./conan-package" )
        script.common.executeCmd( "${scriptPrefix} conan package ../${repository}/${packageSubfolder}/${conanfileName} --source-folder=../${repository}/${packageSubfolder} --build-folder=. --package-folder=./conan-package" )
    }
}
```

Next, since `conan test` requires the package to exist in the local cache, we export it under a temporary, executor-specific name and test it:

```groovy
script.withEnv(environment) {
    def pkgTestError = null
    // conan export-pkg requires git revision because of scm=auto in conan packages - create a dummy git repository
    // PR builds work with source only, the .git folder is not stashed during checkout as it's large and not required until
    // we are actually on branch from which Jenkins should make the release
    // this hack is here only because we want to use scm=auto when exporting to local cache, but here we actually
    // don't want to export the package to the cache - we just want to `conan test` it
    if ( !needCleanupFakeGit ) {
        script.dir( repository ) {
            script.common.executeCmd( 'git init' )
            script.common.executeCmd( 'git add Jenkinsfile' )
            script.common.executeCmd( 'git commit -m "dummy"' )
            script.common.executeCmd( 'git remote add origin dummy' )
        }
        needCleanupFakeGit = true
    }
    def tempPackageName = "${packageName}/${packageVersion}@${conanUsername}/executor${script.env.EXECUTOR_NUMBER}"
    script.common.executeCmd( "conan export-pkg ./${repository}/${packageSubfolder}/${conanfileName} ${tempPackageName} --package-folder=./build-conan-package/conan-package --force -s build_type=${buildTypes[buildTypeIndex]} ${runtime} ${profileCmd} ${packageConfiguration}" )
    if ( !packageRegisteredForCleanup ) {
        packagesForCleanup << tempPackageName
        packageRegisteredForCleanup = true
    }
    // this is to avoid conan error if conan test package does not have exactly the same options as the package being tested
    // why does conan even support options on test-package? It makes no sense to have different package options on
    // package being tested and test package...
    def testPackageConfiguration = packageConfiguration.replaceAll( /-o\s*(\w+(?:=|\s+)[\w'"-]+)/, "-o ${packageName}:\$1" )
    def conanTestCommand = "${scriptPrefix} conan test ./${testPackageFolder} ${packageName}/${packageVersion}@${conanUsername}/executor${script.env.EXECUTOR_NUMBER} -s build_type=${buildTypes[buildTypeIndex]} ${runtime} ${profileCmd} ${testPackageConfiguration} ${buildMissing} --test-build-folder=./test-conan-package"
    try {
        if ( profile.contains( 'ios' ) ) {
            // unlock keychain for application signing on ios node
            script.withCredentials([script.usernamePassword(credentialsId: script.credentials.getIosKeychainCredentialsID(), passwordVariable: 'KEYCHAIN_PASSWORD', usernameVariable: 'KEYCHAIN_USER')]) {
                script.sh "security unlock-keychain -p ${script.env.KEYCHAIN_PASSWORD} ${script.env.KEYCHAIN} && ${conanTestCommand}"
            }
        } else {
            script.common.executeCmd( conanTestCommand )
        }
    } catch (error) {
        script.echo "There was an error during package test: ${error}"
        script.error "There was an error during package test: ${error}"
    }
}
```

Of course, all this (and much more) is in a big try block to catch any errors and ensure that the following finally block, which cleans up the temporary files, is executed:

```groovy
} finally {
    if ( needCleanupFakeGit ) {
        script.dir( repository ) {
            if ( script.common.isWindows() ) {
                script.bat 'rd /s /q .git'
            } else {
                script.sh 'rm -rf .git'
            }
        }
    }
    for ( int i = 0; i < packagesForCleanup.size(); i++ ) {
        // cleanup all packages for testing
        try {
            script.common.executeCmd( "conan remove ${packagesForCleanup[ i ]} --force" )
        } catch( ignored ) {}
    }
}
```

Now, if Conan could work with multiple caches, we could simply do a full cleanup of a temporary cache after each build instead of all these hacks.
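The option-rewriting `replaceAll` in the pipeline above is dense; here is a best-effort Python translation of that Groovy line (the pattern is copied from the script; the function name is mine), useful for checking what it actually matches:

```python
import re

# Pattern copied from the Groovy replaceAll above: it finds unscoped
# `-o name=value` (or `-o name value`) options on the conan command line.
_OPTION_RE = re.compile(r"-o\s*(\w+(?:=|\s+)[\w'\"-]+)")

def scope_options(configuration, package_name):
    """Prefix every unscoped -o option with the package name, so that
    `-o shared=True` becomes `-o mypkg:shared=True`, leaving other
    arguments (e.g. -s settings) untouched."""
    return _OPTION_RE.sub(lambda m: f"-o {package_name}:{m.group(1)}", configuration)
```

For example, `scope_options("-o shared=True -s build_type=Release", "mypkg")` rewrites only the `-o` option and leaves the `-s` setting as-is.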
I have a general question on this one. If the `build()` method no longer drives the build locally, what happens to `conan create`? And if the above case is accurate, how would create actually build anything?
@Daniel-Roberts-Bose No, the `build()` method is not going away; `conan create` will keep calling it to build in the cache. In any case, it seems that `conan build` will be kept as well.
We've also set up a bunch of `invoke` tasks that consume the state files saved by `conan install`.
But that is the point: the saved state files are exactly what this proposal removes.
Yes, that will still be possible, it is unrelated. That comes from the `generate()` output.
Do you already have a solution for the Visual Studio builds? Because AFAIK the generator cannot be stored in the toolchain files, a simple `cmake` invocation would not know which generator to use.
We also use the Conan `build()` and `package()` methods in CI for functional tests. After building and uploading our product Conan package, we consume it from the functional-test recipe and build against it. But we do not want to populate the cache with its artifacts, since we only want to store the JUnit results. Therefore building it locally and using the `package()` method to copy our test results is quite useful.

Edit: since we need to rework all conanfiles when upgrading to 2.0 anyway, the migration effort is acceptable for us.
A slightly off-topic but still somehow related question regarding the generated toolchain file: is there a way to merge it with a potential upstream toolchain file that a project already uses?
In my projects, developers have to locally build packages that they are not necessarily familiar with. The `conan build` command provides a nice abstraction, and developers don't need to open the READMEs to figure out how to compile everything. We could adapt our recipes to work with the new proposed implementation, so I would give a thumbs up to this proposal if the command is kept.
@jasal82 So while the generator cannot be specified via toolchain files, the generator's toolset can be. So the command would only need the generator itself passed explicitly on the `cmake` command line.
Obviously we're not the only company that has already invested several man-years in creating sophisticated Jenkins pipelines to actually get something built with Conan ;)

Just out of curiosity: it seems you're sharing the same Conan cache between different builds. Do you have the cache on shared storage, or is everything built on the same machine? We decided that it's better to have everything containerized and start from a clean cache (for reproducibility reasons), but for CI it could sometimes be beneficial to have shared-cache semantics.
@ohanar I see, but if it's not possible to include everything in the toolchain file, I cannot agree with this proposal. It would be hard to keep track of which generator belongs to which configuration.
@jasal82 @DoDoENT Please recall that the Conan cache is not designed for concurrency, so it shouldn't be shared among concurrent CI jobs. Although it might work in some cases, it can also easily break in unexpected ways in many other situations. It can be especially bad if the cache is put in a shared folder, as any possible synchronization mechanism would be completely invalid in such shared folders.
Exactly what @ohanar said: this is one of the main ideas of the proposal, that this should be possible. It is really bad, and many, many users complain, that the integration with build systems is not transparent and that you actually need Conan itself to drive the build.
@jasal82 For us it would be an anti-feature to enforce a particular CMake generator for a given configuration. We have different developers who use different CMake generators for the same target platforms. In your use case, you can always write a wrapper script in the `generate()` method, so something like:

```python
def generate(self):
    ...
    if self.settings.compiler == "msvc":
        # write script to enforce a particular generator
        with open("run_cmake.bat", "w") as file:
            file.write('cmake -G"Visual Studio 2017" -DCMAKE_TOOLCHAIN_FILE="conan_toolchain.cmake" %*\n')

def build(self):
    if self.settings.compiler == "msvc":
        # use our script
        self.run(rf'.\run_cmake.bat {self.source_folder}')
        ...
    else:
        ...
```
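Stripped of the Conan recipe context, the wrapper idea above reduces to a tiny, testable function (a sketch; the function name and defaults are illustrative, not part of any Conan API):

```python
def cmake_wrapper_script(generator="Visual Studio 2017",
                         toolchain="conan_toolchain.cmake"):
    """Return the contents of a .bat wrapper that pins the CMake generator
    and toolchain file, forwarding any extra arguments (%*) to cmake."""
    return (f'cmake -G"{generator}" '
            f'-DCMAKE_TOOLCHAIN_FILE="{toolchain}" %*\n')
```

A recipe's `generate()` would simply write this string to `run_cmake.bat`; keeping the rendering separate makes the exact command line easy to unit-test.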
@memsharded @ohanar I thought about it some more, and I think generating wrapper scripts could solve the problem for us. As long as there is a solution for merging the Conan toolchain file with a potential upstream toolchain file, as mentioned in my other comment above, I'd say we're good with the proposal.
There are two different things: using the build system as usual, and configuring it. Configuring was never the easy part, even before: one needs to tune dozens of parameters according to one's own needs. As I complained before, invoking CMake manually is still much more to type and prone to errors. Conan elegantly solved all these issues by encapsulating this knowledge in the (uniform) form of options and settings. But you're too focused on CMake. There are other generators, and it's worse when different build systems are used in the same company for different components; having a simple, uniform workflow to set up a development working copy significantly lowers the entry barrier.

Or maybe provide a better alternative for just the "developer" flow 😊
@jasal82 You mean like a generator that emits such wrapper scripts? Well, from a Conan architecture perspective this could be a reasonable alternative, if such a generator is provided by Conan, or if the built-in toolchains provide this as an option.

P.S. Upvoted, because I do believe that all options should be converted into values in the toolchain file(s) (which we already do), and that would let the deprecation of the saved state proceed.
A few concerns from my side:
Those variables can also be defined in the toolchain; the current CMakeToolchain allows defining them. The new MesonToolchain generator is already there as well, same with MSBuildToolchain. All will operate with the same idea.

The proposed flow is instead: `conan install`, then call the build system directly. It will be basically the same lines, just without the locally saved state.
My understanding is that all flags that are actually enforced by the current build helpers will end up in the generated toolchain files. However, what bothers me is the removal of `conan package`.
Yes, we do. Initially we planned on having the same Conan cache for multiple Jenkins executors that may use it in parallel, but this didn't work correctly and, as @memsharded said, it's not something Conan was designed for. Then we decided to have a separate Conan cache for each Jenkins executor. We initially tried having a clean cache for each build, but this was way too slow for us, so we decided to keep the cache between different builds on the same Jenkins executor. This sped up our PR builds by incredible amounts (I was actually planning to give a talk about that at the inaugural ConanDays, but the pandemic prevented that). For example, a single Android debug arm64 `conan install` into an empty cache takes far longer than the build itself.

Now, if you consider that the actual compilation of the project takes about 11 seconds, you will clearly see why we don't want to start with an empty cache on each build (we'd rather spend the saved time on running more tests). Then, since the CI server needs to perform such a build for both debug and release for 4 architectures of Android, 3 on iOS (device, simulator, catalyst), 1 on Windows, 2 on Linux (GCC and Clang), 1 on macOS and 1 on Emscripten (currently; we plan to add an additional flavour), the time spent on downloading Conan packages gets significantly larger than the actual build and test time of the project. We simply can't afford that. Furthermore, if there are multiple Jenkins executors making lots of Conan API calls to the Artifactory server, it sometimes drives it to the breaking point, where Artifactory simply has too many connections and refuses new ones; in those cases we get failed builds that need to be replayed. We achieve reproducibility by keeping the cache clean: we don't allow locally exported packages that are still being tested to remain in it.

The whole reason why we are using Conan is its very well designed binary cache; this reduced our build times from more than an hour (while building the entire codebase from source each and every time) to just several minutes (provided that the cache is full). We then used the saved time to run many more different (and more complex) tests on our PR builds.
Unfortunately, it's not. The NDK is just one example of a larger package. There are also the Emscripten SDK and lots of our internal packages that are several hundred megabytes in size. But that is not the largest problem (it can be worked around with fast disks and a fast network); the problem is recipe fetching, which is slow and sequential no matter how fast your network is. In my benchmark, just downloading the recipes for 76 packages took more than half of the reported time (the download cache does not contain recipes, only binaries). This is why I advocate for the non-polluting cache.
I agree. I voted in favour of the simpler `conan build`. My issue is with exporting untested packages into the cache, namely the need to export a package to the cache in order to test it and inspect that the package layout is as expected.

I would argue that the current solution is more complex, because, as @kenfred stated, the cache ends up mixing released packages with packages under development.

By enabling testing of the Conan package (and inspecting its layout, which is just a type of test) without needing to export it to the local cache, the complexity would be lowered, because the cache would then be just a cache for what's in the remotes (something a cache is supposed to be).
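The "inspect the layout as a test" idea can be as simple as a pure function over the package folder's top-level entries, run on a local `--package-folder` with no cache involved (a sketch; the expected layout here is an assumption for illustration, not what the commenter's packages actually contain):

```python
def missing_layout_entries(entries, expected=("include", "lib", "licenses")):
    """Given the top-level entries of a locally created package folder,
    return the expected entries that are absent. An empty result means
    the layout check passes; no Conan cache is touched."""
    present = set(entries)
    return [name for name in expected if name not in present]
```

In practice `entries` would come from `os.listdir(package_folder)`, and a CI step would fail the build if the returned list is non-empty.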
@memsharded Can we have a separate discussion about the Conan cache topic, also with feedback from the developers? As for this topic, I've also given a thumbs up now. I will try to integrate the toolchain approach into our builds.
One of the issues that I see with the assertion that the cache should contain only "released" packages: those packages might not work either, they are still "under testing", because recipes might not be complete for the current platform or subsystem, and then the cache would be polluted in exactly the same way as if you did a local export. So the Conan cache must be understood as a bidirectional storage, from the server to the client and from the client to the server. In both directions packages in the cache can be broken, so aiming by design for a cache that is always "pristine" is impossible.

I agree with @KerstinKeller that we are deviating from the main proposal of the thread, and this should be taken separately to another thread. There will be more discussions and other proposals related to the development flow and the cache, which doesn't prevent us from agreeing on the following (as described in this proposal).
Indeed, this is true. I agree. |
Yes, I agree that with the current design of the cache it is impossible to keep it pristine. There could very well be a design, maybe a multi-layer cache, with a read-only layer that gets fed from the servers, a writable layer where packages can be created locally, and a staging layer for uploads (however that is realized technically). Or there could be a design where in-source package creation is the default behavior. I hope that you agree that some of the problems users are facing deserve a better answer than the current cache design.
I think you're getting hung up on the "in development" terminology. Of course no software is ever "done." If you'd like, think about it as a versioning and package-immutability issue. The cache should only contain packages that have been released and given an official version and revision. If a package fails on my platform, I can be confident it's because the released package is deficient and not because I was actively developing it. Personally, I would only release a package through CI, which would make the cache unidirectional and give confidence that all packages on the server and in the cache are released packages.

I know it has seemed a bit off topic, but it is actually completely relevant to this proposal. I have to downvote because of keeping `conan build`.
@kenfred this is what is confusing to me. The result of the local build would still need to be consumable by other packages somehow.
I am assuming that layouts and editables are going to get a significant redesign; at least that is what @jgsogo implied here. You can see in that same thread that I'm strongly advocating getting rid of layouts and instead using package folders. Trying to describe how to reach into the build output folder is problematic and redundant, since everything is already known via the package folder. To summarize my proposal:

This has a ton of benefits, which I've tried to describe. You can see that this idea relies on the fact that packages only enter the cache via `conan create` or from the remotes.
@kenfred your proposal about workspaces (and co-development and the local flow in general) has an important problem: it cannot work with the build systems' own dependency graphs, as a build system cannot itself drive the build of other packages in the graph. This has been the major challenge with workspaces and editables; I really wish it was possible for build systems to automate and understand the dependencies between packages.
Thank you for this discussion!

This is not true. The rules and targets of these systems can run any custom step, expect outputs, and make those outputs dependencies of other steps. With CMake's ExternalProject, there are arguments for overriding the configure, build, and install commands. Side note: there are some idiosyncrasies in CMake's ExternalProject; if it's found to be unsuitable, custom commands/targets could work too.

See above. The super-project would call the build of each subproject as a custom step. All of my babbling is to say that `conan build` fits naturally as that step.
You're missing the point: it's not an incremental build. It's a script unconditionally calling a number of commands, as good as always running a full rebuild. I think it was me (at least) who insisted that the workspace feature should see through the whole build to set up transitive dependencies properly at a file level, regardless of whether they come from the same package or different ones. Without it you can't properly handle diamond (or other complex) package dependencies: you either have to build them one by one, or deal with inconsistent builds because dependent packages are built before their dependencies (as in the original workspace implementation). Or you need to invent a much more complex meta-generator which can propagate dependency information from Conan down to the build system. And this is not to mention wasting time waiting while those "incremental" builds finish. This way you get back to the speed of autotools, where a no-op build may take a minute.
I'm quite frustrated to see that long list of anti-patterns, many of which are explicitly discouraged by the Conan authors, still being pushed for: using a CI workflow for development (exporting packages to the local cache, "polluting" it), using developer commands on CI (`conan build`), and so on. No, it's fine to find other ways to use tools, but what is being asked at the bottom of this thread is to use a microscope as a hammer: a total misuse of the tool. And the ask is "no, we want a microscope to hammer nails with" 😔. My understanding of the Tribe was to commit to trying experimental things to give feedback on how they work and find possible improvements, like being a pre-alpha tester for a feature, not guarding the pre-Conan way of setting up a CI and holding everyone back with "we used this command and we would like to continue using it".

And maybe to dot the i's: "development flow" is when you edit code in an IDE with code navigation, debugging and other good stuff, not running a do-it-all command in the terminal. Hence a clear separation between "Conan time" and "IDE time": the latter should not involve Conan at all, only the natural edit-build-debug cycle.
Reading through this, I think Conan should have a stable API and a plugin system; then people can extend it to their needs. It is clear that the Tribe mostly consists of people who think a little more about what they need, that there are passionate views about each use case, and that people have perfect ideas on how to solve their very special use cases. But Conan has to deliver a solid, simple base that covers the basic workflows, and on which you can build.
First of all, I want to share my view about the proposal here.

**Local workflow (inside-IDE development)**

I think we all agree that Conan will benefit from the effort we are doing regarding toolchains. They will make it possible to develop a package using an IDE in a more natural way. This is the local workflow for the same package; it is like using Conan as a consumer. The new `generate()` method is the key piece here.

**Creation of local builds/packages**

I feel like the problem around `conan build` is a different one: sometimes we need a command to generate a local package (whatever that means, see below).

**Co-development, local/temporary caches**

And this topic of a local package is very related to some other comments about temporary or local caches. You should know that we are exploring the look and feel of a new cache, so it is the time to write and think about it and gather all the requirements and limitations (conan-io/conan#8510).

Co-developing locally two packages that belong to the same graph is challenging in Conan. We thought that the workspaces feature would be the answer. It looks like the answer can instead be a better cache design: shared between projects we are working on (co-development), a temporary cache (testing local builds/packages/test_packages), ... without adding too much complexity for the solo developer who wants a tool that just works. Implementation won't be the problem if we accept a database-based cache (conan-io/conan#8510), so we need to think about the business logic. It looks like we need a shared cache for all the Conans running on a machine, which will contain packages coming from the remotes and local packages committed/promoted to it, and then some staging cache that we can share between different projects (co-development) or use in a single project (local build/package/test_package), with packages that won't be taken into account by other Conans. Probably read/write caches and multi-layered/multi-colored caches are different conceptualizations of the same idea. These shared/temporary caches could live inside the main Conan cache.

And we are dangerously approaching a different topic. This is an exciting topic and something WE WILL design. I'll try to organize this information, split it into different topics, and provide a place to talk about them. Otherwise, all these valuable discussions will be hard to follow.
@jgsogo Thanks! I'm concerned about your direction of multiple caches and the "doom" of workspaces. Nevertheless, I think we're all mostly in agreement on the main pain points and the broad outline of a solution. I hope that we can discuss further before you get too far down a path in your design. For now, I'll continue the discussion of multiple caches and the local workspace build folder at conan-io/conan#5513.
I really don't like that `conan build` would recompute the full dependency graph. One of the really nice parts of the "package development lifecycle" was that the piecewise and well-factored steps meant that if you wanted to run specific aspects of the process individually, you could. Requirements resolution is a lot of overhead; it may need network access, and it's at least NP-hard, so a requirements graph of any decent complexity is going to take some time to resolve. It really doesn't belong in a compile-debug-fix cycle. Thus the utility of `conan build` operating on saved state. The one thing I'd miss with a task refactoring is that Conan sets up the environment prior to running commands. I can make our stuff read the generated files, though.
We make use of `conan build` as well. Regarding the environment: will Conan still set it up for the build?
Yes, the idea is that whatever is necessary will be passed to `conan build` as arguments, the same ones you would pass to `conan install`.
The environment management might become more explicit. You can see an ongoing effort in Conan 1.X in conan-io/conan#8534. The goal is to make it similar to the other toolchains: the environment needs to become explicit, and finer control of what is added to the environment is needed. Especially with build-host context scenarios, it is necessary to distinguish a bit more the "build environment" from the "run environment", and there are a few issues/bugs/requests blocked by the fact that there is a single environment. The overall idea still remains, but it might have a different interface, with the environment written to explicit script files instead.
We have added a few further notes to this proposal in 7951d20, to connect it with other efforts; for example, we have started to consider a two-level cache design as a result of this discussion. With these comments, we think the proposal is good to be merged, and we can start working on the implementation.

Thanks very much to all for these useful discussions and feedback!
Is this issue also part of that? It's been idle for 14 days now... |
@DoDoENT not really. That initiative is so far about the pure environment and env-vars, while the "build_modules" issues belong to the generators and toolchains.
The local development flow in Conan 1.X makes it very challenging to do a developer build that is identical to the one created by the `conan create` command, because the build helpers in the `build()` method have a lot of logic and dynamically inject a lot of information into the build. The proposal is:

- Use `conan install`, and then use the normal build tools. The recipe `generate()` method should write files to disk containing all the information necessary to build, in the form of toolchains, find scripts, or environment definitions, that can be used directly by developers.
- The `conan package` command can be removed, as it is not providing any value that `conan export-pkg` cannot.
- The state saved at `conan install` time (conanbuildinfo.txt, conaninfo.txt, graph_info.json) will be removed, as its only use case is to save state so the `conan build` command can partially recover information.
- The `conan build` command might disappear if not relevant anymore. If it is maintained, it will be stateless, executing first a `conan install` and computing the full dependency graph, so it doesn't depend on any locally saved state, and will be less confusing from a UX point of view.
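As a rough illustration of the `generate()`-centric idea (every decision persisted to files on disk, nothing saved as hidden Conan state), consider a minimal toolchain renderer. This is a hypothetical sketch in plain Python, not the actual CMakeToolchain implementation; all names are illustrative:

```python
def render_toolchain(build_type=None, cxx_standard=None, cache_variables=None):
    """Render a minimal CMake toolchain file body: everything the native
    build needs is made explicit in the file, so a plain `cmake` invocation
    can reproduce the configuration with no locally saved Conan state."""
    lines = []
    if build_type:
        lines.append(f'set(CMAKE_BUILD_TYPE {build_type} CACHE STRING "" FORCE)')
    if cxx_standard:
        lines.append(f"set(CMAKE_CXX_STANDARD {cxx_standard})")
    for name, value in (cache_variables or {}).items():
        lines.append(f'set({name} "{value}" CACHE STRING "" FORCE)')
    return "\n".join(lines) + "\n"
```

A recipe's `generate()` would write this string to `conan_toolchain.cmake`, after which the developer runs `cmake -DCMAKE_TOOLCHAIN_FILE=conan_toolchain.cmake ...` directly, which is precisely the "install + use your build system" flow the proposal describes.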