Docker support on Mac OS X with boot2docker #488

Closed
jkozlowski opened this issue Jul 2, 2015 · 9 comments

@jkozlowski

Hi,

I followed instructions on #337 to be able to run a ghcjs project on a Mac and got this:

$ stack exec -- cabal --config-file=cabal-ghcjs-config update
WARNING: Using boot2docker is NOT supported, and not likely to perform well.
tar: ./.bash_logout: Cannot open: Permission denied
tar: ./.profile: Cannot open: Permission denied
tar: ./.bashrc: Cannot open: Permission denied
tar: Exiting with failure status due to previous errors

@snoyberg suggested that @manny-fp might have some workarounds for this; would you be able to share them?

@snoyberg
Contributor

snoyberg commented Jul 2, 2015

I just noticed #194, which I believe is designed to address this case. If so, @manny-fp please close this and let's focus discussion around #194.

@snoyberg snoyberg closed this as completed Jul 2, 2015
@snoyberg snoyberg reopened this Jul 2, 2015
@borsboom
Contributor

borsboom commented Jul 2, 2015

Yes, #194 is the same thing. Basically the workaround goes something like this:

  1. Create a VM that has Docker server installed.
  2. Modify Docker config in the VM so that it listens on a TCP socket.
  3. Create a user in the VM with the same UID and GID as the user on your host OS.
  4. Set up a reliable shared filesystem between the host and the VM, mounted at the same location on both (e.g. if you want to share your home directory /home/myuser, make sure it's also at /home/myuser in the VM). Note: VirtualBox's "shared folders" are not the solution for this; they are much too slow for building Haskell code. I've had good luck with NFS, except for the occasional stale file handle error.
  5. On the host, point DOCKER_HOST at the Docker daemon on the VM.
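
A rough sketch of steps 2-5, purely for illustration (the addresses, paths and UID below are placeholders, not values from this thread; adapt them to your own VM and user):

# 2. In the VM, make the Docker daemon listen on TCP as well as the Unix socket
#    (e.g. in /etc/default/docker on Debian/Ubuntu):
#    DOCKER_OPTS="-H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock"

# 3. In the VM, create a user whose UID matches your host user
#    (the first user on OS X is usually UID 501):
sudo useradd --uid 501 --create-home myuser

# 4. On the host, export your home directory over NFS (e.g. in /etc/exports):
#    /Users/myuser -alldirs -mapall=501:20 <vm-ip>
#    and mount it at the same path inside the VM:
sudo mkdir -p /Users/myuser
sudo mount -t nfs <host-ip>:/Users/myuser /Users/myuser

# 5. On the host, point the Docker client at the VM's daemon:
export DOCKER_HOST=tcp://<vm-ip>:2375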

With things in their current state, I'd recommend instead just using Vagrant to create a basic VM with Docker installed, and running your stack commands in that VM directly. Just make sure you're using something other than the default VirtualBox shared folders for folder syncing.
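
If you go the Vagrant route, a minimal Vagrantfile could look something like this (box name and IP are just placeholders; the important parts are the Docker provisioner and the NFS synced folder):

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"
  # NFS synced folders require a private network with a static IP
  config.vm.network "private_network", ip: "192.168.50.4"
  config.vm.synced_folder ".", "/vagrant", type: "nfs"
  # Installs the Docker engine inside the VM
  config.vm.provision "docker"
end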

@borsboom borsboom closed this as completed Jul 2, 2015
@bijoutrouvaille

I wanted to share one simple workaround that has worked well for me so far. It worked on a set of fairly small projects, so I hope that someone can point out the flaws in this, whatever they may be, or find it useful. It goes thusly:

  1. Run a fresh debian container with /root/.stack volume exposed (-v /root/.stack). I call it stackstore
  2. Build a debian image with stack and other dependencies. I call the image stack. See my Dockerfile below.
  3. Run a stack container with the options --name myproj -it --volumes-from stackstore -v $PWD:/proj -w /proj stack /bin/bash, where $PWD points to your current Haskell project (see the command sketch below).

Now you should be in your project folder as seen from inside your container, and you can do the usual stack init, stack setup, stack ghci and so on. You can launch more stack containers for other projects, and they will share the stack cache on the stackstore container. I chose Debian because of its size and ubiquity; I imagine other distributions would work too. Finally, I'm not sure how much this relies on VirtualBox's shared folders, but I haven't experienced slower builds yet.
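
For reference, the three steps above translate into commands roughly like these (a sketch; the container and image names match the ones used here, and the Dockerfile is the one below):

# 1. Data container that holds the shared stack cache
docker run --name stackstore -v /root/.stack debian:jessie true

# 2. Build the stack image from the Dockerfile below
docker build -t stack .

# 3. Interactive container for the current project
docker run --name myproj -it --volumes-from stackstore -v "$PWD":/proj -w /proj stack /bin/bash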

FROM debian:jessie

RUN apt-get -qq update
RUN apt-get -qq install curl
RUN curl -L https://github.com/commercialhaskell/stack/releases/download/v0.1.3.1/stack-0.1.3.1-x86_64-linux.gz | \
      gunzip > /bin/stack && chmod 755 /bin/stack

RUN apt-get -qq install build-essential
RUN apt-get -qq install libgmp-dev
RUN apt-get -qq install zlib1g-dev

# fixes https://github.com/commercialhaskell/stack/issues/793
ENV LANG=C.UTF-8 

CMD ["/bin/bash"]

@nrolland
Contributor

Hi @bijoutrouvaille, are you still happy with this setup?

@borsboom
Contributor

@nrolland: Some more progress was made over at #194 (comment). I'm using the setup described there quite successfully.

@bijoutrouvaille

@nrolland It works so well for me that I doubt I would spend the time converting to the official method, were it hypothetically completed.

@borsboom
Contributor

borsboom commented Dec 1, 2015

@bijoutrouvaille Your example is basically running stack in a Docker container (with smart volume mounts to avoid doing extra work), which is a totally valid way to do things, but it isn't really using Stack's Docker integration. One advantage of what you're doing is that ~/.stack is on the boot2docker VM's local disk, which means you don't suffer the overhead of VirtualBox's shared folders for all the snapshot dependencies (only your own projects have to go through the shared folders, and if they're pretty small you might not notice the slowness).

@nrolland
Contributor

nrolland commented Dec 1, 2015

I tried @borsboom's Vagrant setup (thank you for putting it out there) and could compile with mostly no problems.

I'm following that up with what turns out to be a long text for information / discussion.
(This is probably not the best place to discuss it; I'm happy to move it elsewhere or break it into pieces.)


I am running a version of stack built from git on my Mac, which means the unreleased version 0.1.9.0 at this time. So when I tried stack docker build, which builds within the Docker build image, it tried to - and could not - find such a version among the stack binaries released for Linux and available for download.

That's really fair game; I was happy to see one more moving part I hadn't thought of pinned down along the build chain. Besides, the binary to use in the Docker builder can be specified in stack.yaml via "stack-exe:" (per the configuration documentation), so I suppose I could just ssh into the Vagrant image and install that version manually.
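
Something like this in stack.yaml, I believe (the path is just an example; check the configuration documentation for the exact keys your stack version supports):

docker:
  enable: true
  # Point at a stack binary installed manually inside the VM/image, instead of
  # letting stack try to download a released Linux binary that may not exist yet
  stack-exe: /usr/local/bin/stack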


Other people use raw Docker, as a VM plus commands launched via a Dockerfile:

http://www.alfredodinapoli.com/posts/2015-11-03-how-i-deploy-haskell-code.html
stack in a docker container with all fpco/stack-build dependencies

https://github.com/eiel/docker-haskell-stack/blob/master/Dockerfile_WithWarp
all commands launched from Dockerfile


In my case I wanted to deploy to Heroku using their Docker deployment, so I needed to:

  • build to target their machine
  • make a docker image compatible with their deployment story

For building, if you want to use stack docker (benefits: automatically getting the same stack version, deploy, etc.?), you can specify a custom build image in stack.yaml (cf. the configuration documentation again).
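
For example, assuming you have built such an image (the image name here is made up):

docker:
  enable: true
  image: myorg/heroku-stack-build:latest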

That highlights that it would be nice to have a repository of images and/or Dockerfiles which build the stack toolset on top of the various base images used by Amazon, Heroku, etc., to make it easier to target different venues. That would be easier still if the fpco Dockerfiles were published, as they would serve as a reference point.


To solve the entire build + deploy story, @eiel explains there how he builds from within Docker and turns a build environment into a deployment environment: from the Heroku base image, to one with stack, to one with warp, to one with the app, which is really the deploy image: it copies the binary into the /app directory, conforming to the way Heroku constrains images to be deployed, all in this lovely (hmm) Dockerfile format.
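
The rough shape of that chain, as I read it (the image names are mine, just to illustrate the layering):

# Dockerfile.stack:  FROM heroku/cedar:14        + install stack
# Dockerfile.warp:   FROM myuser/heroku-stack    + build and install warp
# Dockerfile.app:    FROM myuser/heroku-warp     + COPY the app binary into /app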

Heroku is one such case, but I imagine other platforms use their own images with build and deployment requirements, so we have a pattern here.


(Rant about the value of containers and why we might need compositionality to build them.)
One thing I am not clear on is how those specifics ruin the purported advantage of using "containers", which are not standard since they have to conform to some constraints. I guess one benefit is that Dockerfiles make those constraints explicit-ish, and sharing them to deal with the constraints of the various platforms makes things easier. But Dockerfiles are not compositional, so we'll likely end up with 10K unmaintained Dockerfiles instead of a core of well-maintained components used in 10K different ways. That is roughly what Nix provides, with a language to build from individual pieces.


So maybe the universal setup would be (would it?):

  1. Build
    • one build Docker base image per provider (Heroku, Amazon, ...), with a Dockerfile similar to fpco/stack-build (or otherwise obtained) on top of the vanilla base image for each provider
    • use stack docker build with stack.yaml pointing to that build image to produce the binaries
      Any other way to produce good binaries for the target platform would work: raw stack, stack with Nix, etc. What is at stake here is reproducibility of the build.
  2. Deploy
    • start from a runtime Docker base image for the provider (for Heroku that would be heroku/cedar:14 on Docker Hub), with a Dockerfile similar to fpco/stack-run, where Nix has also been added (like @datakurre does in nix-builder.docker) along with platform-specific things like here
    • then either:
      • add your dependencies in a Dockerfile, like here (maybe splitting into two phases to reuse some containers for frequently used ones, and manually making sure you copy things to the correct places, etc.)
      • or add your dependencies to a Nix environment in a deploy.nix file (at minimum the binary built previously, but potentially others as well, like warp, libXXX), and, with a builder script for the platform (like here, taking care of copying output to the standard location for Heroku, etc., as in a manual Dockerfile), build the runtime container for your app (like the Dockerfile here, but built from the runtime base Docker image for the said provider)

That would comply with the "standard" out there and get us a compositional handling of dependencies instead of a monolithic Dockerfile. The project could also be run locally with the same production environment, etc.
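
A very rough sketch of what the two halves might look like for Heroku (image names and paths are placeholders, not an existing setup):

# 1. Build inside a provider-flavoured build image via Stack's Docker
#    integration (stack.yaml would point docker: image: at that build image):
stack --docker build
stack --docker install --local-bin-path ./dist

# 2. Deploy: package the binary into a runtime image based on the provider's
#    base image, e.g. a Dockerfile along the lines of:
#      FROM heroku/cedar:14
#      RUN mkdir -p /app
#      COPY dist/my-app /app/my-app
#      CMD ["/app/my-app"]
docker build -t myuser/my-app .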


A simpler alternative is of course to forgo Docker entirely and build static binaries in some VM, then copy-deploy them one way or another, which is really what @adinapoli is using Docker for. That is great for sidestepping all this complexity, which should not be hard at all once we factor out some common infrastructure, but until then it is a massive pile of ... AFAIK.

@datakurre

@nrolland As you are using Nix, you might be interested in the upcoming Docker image build support in Nixpkgs NixOS/nixpkgs#11156
