External libraries
The versioning scheme of node-pre-gyp handles the core complexity of producing binaries that will run against different platforms, architectures, and node versions.
But one more problem remains: your binaries will depend on a C and C++ library implementation that may or may not be stable across different operating system versions. And some node C++ modules might have external shared library dependencies. These challenges can be overcome, but they take some attention. I plan to provide some basic docs about best practices and gotchas developers will need to know.
Here is a start:
The easiest method is to compile external deps as static libraries instead of shared libraries. This is the approach taken for a variety of dependencies like libpng, libprotobuf, or boost in https://github.com/mapnik/mapnik-packaging. Alternatively, shared libraries can be copied next to the node C++ module, and install_name_tool (OS X) or chrpath (linux) can be used to ensure those local shared libs are used instead of potentially globally installed ones. Finally, another option is to create a gyp configuration for external libraries so that they are compiled and linked on the fly. This is easy for small C/C++ libraries and difficult for larger, more complex ones. An example of this method is shown in the test apps: test app 3 shows how to use gyp to statically link a statically compiled library and test app 4 shows how to use gyp to dynamically link a shared library.
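As a minimal sketch of the shared-library route (the names libfoo.1.dylib and binding.node are placeholders, and note that chrpath can only rewrite an rpath entry that already exists in the binary):
# OS X: make the module load the copy of the lib shipped alongside it
install_name_tool -change /usr/local/lib/libfoo.1.dylib @loader_path/libfoo.1.dylib binding.node
# Linux: point the rpath at the module's own directory
chrpath -r '$ORIGIN' binding.node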
OS X is dead easy. You just set compiler flags that tell the compiler the oldest OS X version you want to support. For example, if you want your binaries to run on versions as old as 10.7 just do:
export CFLAGS="-mmacosx-version-min=10.7"
export CXXFLAGS="-mmacosx-version-min=10.7"
export LDFLAGS="-mmacosx-version-min=10.7"
node-pre-gyp build
Be aware that currently the Node build scripts default to targeting 10.5, so if you don't do anything your module should hypothetically work on >= 10.5: https://github.com/joyent/node/blob/d0ff900a65e4250ad49dcf02e07262af1fa5de25/common.gypi#L206
The main gotcha on OS X is that sometimes C++ code requires features not available on older OS X versions, and you'll get a compiler error if -mmacosx-version-min is set to too old a version. For example, for -std=c++11 support you need -mmacosx-version-min set to at least 10.7.
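To double-check that the deployment target actually ended up in your compiled module, one sanity check (assuming the module lands at build/Release/binding.node, a placeholder path; newer toolchains may record an LC_BUILD_VERSION load command instead) is:
otool -l build/Release/binding.node | grep -A3 LC_VERSION_MIN_MACOSX
The version reported there should match the -mmacosx-version-min you exported.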
Windows usually just works, but the big gotcha is whether the C++ runtime has been statically linked or dynamically linked.
- node.exe statically links the windows C++ lib, so it can be installed and run without any auxiliary installs. If your module does not link any external DLLs then you should be good: your binary will run out of the box just like node.exe does (for the given architecture).
- But if your app depends on any custom C++ libraries that have been dynamically linked, then users who install your module will need to have installed the C++ Redistributable Package for the Visual Studio version you built the binaries with. So if you built your binaries with Visual Studio 2010, your users would need to have installed the vcredist_x86.exe from http://www.microsoft.com/en-us/download/details.aspx?id=5555. Often users have already done this for some other application in the past, so errors from this problem only show up on vanilla machines. And the older the Visual Studio version you use, the more likely the user will already have installed the vcredist_x86.exe.
The key option that controls static vs. dynamic linking of your module to external deps is the RuntimeLibrary type. For example, node-mapnik links mapnik.dll dynamically and therefore has this override in its common.gypi, which makes it depend on vcredist_x86.exe:
'msvs_settings': {
  'VCCLCompilerTool': {
    'RuntimeLibrary': '2' # /MD
  }
}
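To confirm what a compiled module actually ended up linking against, one option (a sketch; dumpbin ships with Visual Studio, and build\Release\binding.node is a placeholder path) is to run, from a Visual Studio command prompt:
dumpbin /DEPENDENTS build\Release\binding.node
If the output lists DLLs like MSVCR100.dll / MSVCP100.dll, the module is dynamically linked against the VS2010 runtime and end users will need the matching vcredist installed.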
On linux, libc versioning is a challenge. If you build on a very new linux version with the latest libc, then users on older linux systems may not have all the libc symbols available to run your binaries.
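A quick way to see which versioned glibc symbols your compiled module requires (build/Release/binding.node is a placeholder path) is to dump its dynamic symbol table; the highest GLIBC_x.y version listed is the minimum glibc your users will need:
objdump -T build/Release/binding.node | grep GLIBC_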
Here are the options:
If you build on the oldest debian/ubuntu/etc distribution you want to support, it will work on that distro and all future ones (at least as far as glibc is concerned). For example, if you build on an Ubuntu LTS then you can rest assured it will work on that LTS version and on all Ubuntu releases that come after.
- Pro: reliable method, also what Node.js core does: https://github.com/joyent/node/pull/9098 | https://github.com/joyent/node/issues/6801
- Con: Getting access to old machines can be a hassle. For example, Travis is great for automating builds, but Travis uses Ubuntu Precise and Heroku uses Ubuntu Lucid, so your binaries will not run on Heroku or RHEL if built on Travis.
As described at https://rjpower9000.wordpress.com/2012/04/09/fun-with-shared-libraries-version-glibc_2-14-not-found/ it's possible to compile on a more recent linux but still support older linux by forcing the older version of a versioned glibc symbol to be used.
- Pro: Some apps might only need one symbol fixed to work on older linux
- Con: Requires trial and error to get working, and it is not yet working in node-sqlite3 despite https://github.com/mapbox/node-sqlite3/commit/da0800bf8be009e7b356022312761aa3117208de
Listaller is basically designed to solve this problem, as well as others.
Note: now named Limba: http://people.freedesktop.org/~mak/limba/faq/#native-replace
- Pro: Creates portable builds out of the box without glibc issues; you just have to override the CC/CXX flags
But lots of questions:
- Does it support c++11 yet?
- Any runtime speed penalties to using it over gcc or clang?
- Do binaries built with it require any custom libgcc or other libraries to be around?
More references on linux issues: