compose yml with datastore selection #1315

Closed

nbaillie opened this issue Jun 1, 2017 · 35 comments

@nbaillie

nbaillie commented Jun 1, 2017

I'm OK with the creation of volumes like the one below:
docker volume create --driver=vsphere --name=MyTestVol@VMWARE-DOCKER -o size=1gb

But how can I add the @VMWARE-DOCKER component to the yml for stack deployment?

I have tried like...

volumes:
  db:
    driver: vsphere
    driver_opts:
      size: 120Gb
      datastore: VMWARE-DOCKER

and...

volumes:
  db@VMWARE-DOCKER:
    driver: vsphere
    driver_opts:
      size: 120Gb
      datastore: VMWARE-DOCKER

and some other variations.

@pdhamdhere
Contributor

@nbaillie Thanks for reporting the issue. @datastore is optional. By default, volumes are created on the same datastore as the VM. All VMs on the same datastore can use a volume without specifying the @datastore qualifier.

Not being able to specify @datastore in a compose file is a BUG. Thanks for the catch.

@lipingxue
Contributor

@nbaillie The following is a workaround:

  1. Create the volume:
docker volume create --driver=vsphere --name=MyTestVol@VMWARE-DOCKER -o size=1gb

  2. Use the volume as an "external volume" in the compose file:
volumes:
  db:
    external:
      name: "MyTestVol@VMWARE-DOCKER"

May I know why you need to specify the datastore name explicitly? By default, volumes are created on the same datastore as the VM. Thanks!
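For context, here is a minimal sketch of a full compose file that mounts such an external volume in a service; the service image and mount path are placeholders, not taken from this thread:

version: '2'
services:
  db:
    image: postgres                      # placeholder image
    volumes:
      - db:/var/lib/data                 # mount the pre-created volume
volumes:
  db:
    external:
      name: "MyTestVol@VMWARE-DOCKER"    # volume pre-created above with the vsphere driver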

@nbaillie
Author

nbaillie commented Jun 1, 2017

Thanks for the workaround, and I'm happy to help catch this.

We have the VM's primary volume on a vVol datastore and want the Docker volumes off vVol, as our storage layer has better support and features for non-vVol datastores. Additionally, we are looking at using different performance tiers of storage for certain volumes: logs, db, archives, etc.

@ckotte

ckotte commented Jun 3, 2017

This is also necessary if your worker nodes are distributed over several datastores. If you don't specify the datastore and the container fails over to a worker that is stored on another datastore, the container can't access its persistent data.

@pdhamdhere
Contributor

Tracking issue with docker/compose #4886

@lipingxue
Contributor

docker/compose #4886 was closed. Filed a new issue: moby/moby #33529.

@madjam002

Is it possible to have the datastore as a driver_opt as well? Just an idea; it would mean we don't have to worry about special characters in the volume name.
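For clarity, a hypothetical sketch of that suggestion (the same shape as nbaillie's first attempt above; as of this thread the driver does not act on a datastore driver_opt):

volumes:
  db:
    driver: vsphere
    driver_opts:
      size: 120Gb
      datastore: VMWARE-DOCKER   # datastore selected via driver_opts instead of "@" in the name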

@lipingxue
Contributor

@madjam002 Thanks for your suggestion. We are trying to figure out the best approach to fix this issue, and having the datastore as a driver_opt is one approach we will evaluate.

@msterin
Contributor

msterin commented Jun 7, 2017

@madjam002 @nbaillie - I believe we already support default_datastore configuration (with a few caveats) in the experimental multi-tenancy support and the related admin config command. It's experimental, so not very smooth, but we are working on it and would appreciate any feedback/suggestions. See below.

To define a default datastore (and many other things, like access control or size quotas), you need to init the configuration and then configure the desired default-datastore for vmgroup _DEFAULT. (You can configure a default datastore for any other vmgroup as well, but you would need to define the vmgroup first using the admin vmgroup command.)

Assuming the desired name is vsanDatastore (it can be any shared DS), the commands on ESX look as follows:

alias admin=/usr/lib/vmware/vmdkops/bin/vmdkops_admin.py
admin config init --datastore vsanDatastore   # <=== use some shared datastore. The config DB is tiny
admin vmgroup update --name _DEFAULT --default-datastore vsanDatastore  # <== this is where you want to place all volumes by default

Note that at any moment you can use --help in the admin CLI to get more info.

  1. You need to run the 'config init' command on each ESX box once. Support for autoconfig (where you would do it only once) is not done yet.
  2. 'vmgroup update' needs to be run only once.

CAVEATS for config on a shared datastore:

  • VMFS or VSAN: under heavy management load (i.e. many volumes created/mounted; I/O does not matter) on multiple ESXs, some operations (volume create/mount) may fail due to a VMFS locking issue. This is why it is experimental :-)
  • Using NFS for the config is risky: on some NFS servers the DB can get corrupted due to partial NFS locking implementations.
  • We have not tested it with VVol.

Or, you can use local configuration:

  • on each ESX:
admin config init --local
admin vmgroup update --name _DEFAULT --default-datastore datastore1

(or you can configure it once and then copy /etc/vmware/vmdkops/auth-db around)

No conflicts here, but unfortunately the local config won't survive an ESX reboot; you'd need to restore it on boot, e.g.:

if [ ! -e /etc/vmware/vmdkops/auth-db  ]
then
      cp  /vmfs/volumes/<some_shared_db>/dockvols/vmdkops_config.db /etc/vmware/vmdkops/auth-db
fi
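(One place such a restore snippet could be hooked in at boot - an assumption on my part, not something stated in this thread - is /etc/rc.local.d/local.sh on the ESXi host, which persists across reboots.)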

@lipingxue
Contributor

@nbaillie @ckotte Based on what @msterin described above, we do have a way to configure a default_datastore other than the datastore where the VM lives. With this, you don't need to specify the volume as "vol@ds_name" in the YAML file. Could you let us know if it works for you?

@shaominchen modified the milestones: 0.17, 0.16 Jun 20, 2017
@shaominchen
Contributor

We have provided a workaround to customers. To fix this issue completely, we may need to fix Docker code. Pushing to the next release for now.

@madjam002

@shaominchen Is there any documentation available for the workaround? Cheers

@shaominchen
Contributor

@madjam002 If you can pre-create the volume, you can follow Liping's comments above: #1315 (comment)

If you don't want to pre-create the volume, you can follow Mark's comments above: #1315 (comment)

We have filed the issue with Docker: moby/moby#33529. If it cannot be addressed on the Docker side, we will provide a document for the above workarounds.

Hope this helps.

@madjam002

@shaominchen Unless I'm missing something, this doesn't allow you to create volumes on several different datastores from the same compose file? So it's not 100% a workaround.

For example, take a container that needs both a volume on an SSD-backed datastore and one on a slower magnetic datastore.
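For illustration, once vol@datastore names are accepted in compose files (the fix this thread eventually converges on), that scenario might look like the sketch below; the image and datastore names are hypothetical:

version: "3.4"
services:
  app:
    image: myapp                            # placeholder image
    volumes:
      - fast_vol:/data/hot
      - slow_vol:/data/cold
volumes:
  fast_vol:
    name: "fast_vol@ssd-datastore"          # SSD-backed datastore (hypothetical name)
    driver: "vsphere"
  slow_vol:
    name: "slow_vol@magnetic-datastore"     # slower magnetic datastore (hypothetical name)
    driver: "vsphere"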

@lipingxue
Contributor

@madjam002, if you need to create volumes on different datastores from the same compose file, then setting the default_datastore for a vmgroup does not help here.

May we ask why pre-creating the volumes does not work for you? I filed a moby issue, moby/moby#33529, and had some discussion with a Docker engineer there. They asked why using an external volume in the compose file (which means you pre-create the volume) does not work. Is there any specific reason that approach does not work for you? You can also add your input in that moby issue. Thanks!

@madjam002

@lipingxue Pre-creating volumes kind of defeats the purpose of using the Compose file format in the first place - defining all of the resources that should be spun up in one place. But it is a workaround for now, I guess.

@msterin
Contributor

msterin commented Jun 22, 2017

It is a temporary band-aid only, indeed. @lipingxue - what can we do to accelerate the actual solution?

@tusharnt modified the milestones: 0.17, 0.16 Jun 28, 2017
@lipingxue
Contributor

moby/moby#33529 was closed, and another issue, docker/cli#274, was created in the docker/cli repo to track this. I have created PR docker/cli#306 for it.

@tusharnt modified the milestones: Biweekly sprint - Timon, 0.17 Jul 17, 2017
@lipingxue
Contributor

This has been fixed through docker/cli#306, and will be available on docker 17.08.0.

@pdhamdhere
Contributor

@lipingxue We need to add a test when Docker 17.08.0 is available. Do you want to file a new issue, or reopen this one for tracking purposes?

@lipingxue
Contributor

#1666 is filed to track adding a test for this.

@CollinChaffin

I am confused as to why this issue was closed. I literally had to search all the CLOSED issues to find this reference to the problem, and it is absolutely a bug! When Compose vomits on and cannot simply parse the fully supported default naming format that the fully supported vSphere volume driver uses - the same format Docker itself uses and displays when issuing a "docker volume ls" - this should be made a much higher priority to resolve.

If the darn "@" is "optional" (and my tests make me question whether it really is), and the other half of the Docker tools vomits and throws errors when that "optional" half of the name is used... my 30 yrs as a dev says you either fix that half to recognize the "@", 'cause it's here to stay, or it's buh-bye Mr. "@" - we can't support that with half our code, so we're going to remove it, since it's lingering in the "optional" world right now anyway. :)

My testing:

I have upgraded docker to: 17.09.0-ce-rc3
with compose version: 1.16.1, build 6d1ac21

...and after an hour of attempts at every known syntax, including referencing an external name in my compose yml files, I cannot simply add an external VMware vSphere volume with that "@" and run compose without error. This should really be re-opened until someone can confirm via tests that it has been addressed, or until a valid workaround is clearly documented.

@CollinChaffin

I wanted to post my current compose file (I have tested several) that seems to match the "workaround" posted above. I just want ANY way right now to create a VMDK-backed vSphere volume for persistence (which works today), and then actually USE it in Docker.

version:  '2'
services:
  database:
    ....STUFF....
    volumes:
      - svcdb_db:/database_data
    ....STUFF.... 

volumes:
  svcdb_db:
    external:
      name: "svcdb_db@ix2b-VMware"

The above config actually gives me this error:

ERROR: Named volume "svcdb_db:/database_data:rw" is used in service "database" but no declaration was found in the volumes section.

If I change it to this (the official docs show "true" as a requirement for "external"):

version:  '2'
services:
  database:
    ....STUFF....
    volumes:
      - svcdb_db:/database_data
    ....STUFF.... 

volumes:
  svcdb_db:
    external: true
      name: "svcdb_db@ix2b-VMware"

Then I get this error (where line 63, column 11 is the beginning of the "name:" line):

ERROR: yaml.scanner.ScannerError: mapping values are not allowed here
  in "/root/svcdb/common.yml", line 63, column 11

And if I change the file to:

version:  '2'
services:
  database:
    ....STUFF....
    volumes:
      - db:/database_data
    ....STUFF.... 

volumes:
  db:
    external: true
      name: "svcdb_db@ix2b-VMware"

This is because I have also seen this "workaround" written as if the volume key needed to NOT match the actual vSphere volume name (before the @), with "name:" then used to map it back to the real external volume. But it results in the 1st error:

ERROR: Named volume "db:/database_data:rw" is used in service "svcdb" but no declaration was found in the volumes section.

And finally, to make this whole thing THAT much more absurd, if I simply move the "name:" line out one indentation level, as shown in newer examples:

version:  '2'
services:
  database:
    ....STUFF....
    volumes:
      - db:/database_data
    ....STUFF.... 

volumes:
  db:
    external: true
    name: "svcdb_db@ix2b-VMware"

I get a completely different error:

ERROR: The Compose file '/root/svcdb/common.yml' is invalid because:
volumes.db value Additional properties are not allowed ('name' was unexpected)
volumes.db.external contains an invalid type, it should be a boolean, or an object

The sections all follow the +2 indentation rule, and if I simply remove the last volumes section and point the service volume at a normal persistent volume on the host, it builds fine. This issue has rendered the entire vSphere volume driver broken and useless, because you cannot actually USE the darn vSphere volumes! Can someone please copy my example here and indicate exactly what needs to change in the syntax to supposedly make this work as a workaround? Also, if this was supposedly fixed in 17.08 and I am already on 17.09-rc3, I am not clear why it's even an issue at all; either way, do we know for certain in which build this parsing/schema issue WILL finally be fixed?

Thanks!

@msterin
Contributor

msterin commented Sep 24, 2017

Please do not close an issue before the fix is verified (unless you close it as won't-fix or not-an-issue).
docker/cli#306 indeed claims that vol@datastore now works fine, but we need to check that it works with vDVS.

//CC @lipingxue @tusharnt

@msterin reopened this Sep 24, 2017
@CollinChaffin

CollinChaffin commented Sep 24, 2017

Thanks so much @msterin! I swear I have tried every combination now (including the OP's original approach of using standard syntax with the full volume name including the "@"), before and after the upgrade to 17.09-rc3. But just to be safe, I wanted to re-configure it for you, carefully run through each step once, and provide all the output, in case it is somehow anything on my end (and if not, to help you guys on yours).

So first, the full output on the versions of this test:

root@vdocker01 [ ~/svcdb ]# docker version
Client:
 Version:      17.09.0-ce-rc3
 API version:  1.32
 Go version:   go1.8.3
 Git commit:   2357fb2
 Built:        Thu Sep 21 02:30:17 2017
 OS/Arch:      linux/amd64

Server:
 Version:      17.09.0-ce-rc3
 API version:  1.32 (minimum version 1.12)
 Go version:   go1.8.3
 Git commit:   2357fb2
 Built:        Thu Sep 21 02:36:52 2017
 OS/Arch:      linux/amd64
 Experimental: false

I then verified the compose version:

root@vdocker01 [ ~/svcdb ]# docker-compose version
docker-compose version 1.16.1, build 6d1ac21
docker-py version: 2.5.1
CPython version: 2.7.13
OpenSSL version: OpenSSL 1.0.1t  3 May 2016

I set my file to this syntax, which follows the latest docs by simply indicating "true" for the external volume, and assumes that "@" is now legal in a volume name:

version:  '2'
services:
  database:
    ....STUFF....
    volumes:
      - svcdb_db@ix2b-VMware:/database_data
    ....STUFF.... 

volumes:
  svcdb_db@ix2b-VMware:
    external: true    

And then, just to be safe, I did a FULL REBOOT, brought it back up, and very unfortunately, like the OP, I still receive the same error:

ERROR: The Compose file '/root/svcdb/common.yml' is invalid because:
volumes value Additional properties are not allowed ('svcdb_db@ix2b-VMware' was unexpected)

I'm certainly not trying to hijack the OP's thread; if I have some underlying version issue I have missed, then I apologize - but ultimately I AM trying to do the same thing as the OP (and others) and receiving the same errors, and hopefully my testing can help everyone with this issue.

EDIT: Figured out the file version error I was getting when switching - it was due to my venv setup - but the OP's issue does still seem to be present unless I've missed something else.

EDIT: So I had some pollution from venv causing a couple of issues, including the original error above that the OP posted. However, now being pretty darn sure that the latest 1.16 is reading the file (because I CAN change the file version to 2.3+), I get a totally different error:

ERROR: The Compose file '/root/svcdb/common.yml' is invalid because:
volumes value 'svcdb_db@ix2b-VMware' does not match any of the regexes: u'^[a-zA-Z0-9._-]+$'

That sure seems like there is still, in fact, SOME kind of parsing/schema issue present. I know you guys have automated tests set up, so unless I find any additional issues on my end, I'll await what is hopefully just some direction/confirmation of everything required (minimum compose file version, syntax, etc.) to support the full vSphere volume name.

@CollinChaffin

Unbelievable. So I am on about hour twelve of trial and error, and I can definitively say that I have found a workaround and some definitive answers. Hopefully you Docker guys can still detail if/when this is/was fixed without requiring this workaround, but hopefully the testing I have done will further assist you in pinpointing the root cause of this issue #1315.

In my test case, one issue was that, as you can see in my posts above, I was using "common.yml" with "extends:". This should NOT be that big of an issue, but these findings show just how deep this parsing issue must be; if you consider the merging of "extends" layers, it gets out of control (hence my whole day wasted on a MASSIVE number of test combinations). It is also, in my case, the reason the above-posted workaround was not working.

Scenario 1:

Attempting the workaround for the (continued) inability to include "@" in vSphere volume names due to the above regex parsing & schema issue #1315.

Configuration

docker-compose.yml:

version: '2'
services:

  database:
    extends:
      file: common.yml
      service: database

common.yml:

version: '2'
services:

  database:
    ....STUFF....
    restart: unless-stopped  
    volumes:
      - svcdb_db:/db_backups
      
volumes:
  svcdb:
    external:
      name: svcdb_db@ix2b-VMware      

Result

Failure with following error:

ERROR: Named volume "svcdb_db:/db_backups:rw" is used in service "database" but no declaration was found in the volumes section.

As you can see, it IS declared. In fact, I've spent hours RE-declaring it, and as shown in my posts above, the friggin parser DOES read the typed section, because it happily complains and throws 100 different errors if you attempt different indentations or keywords like "name:"... but it will NEVER work.

Scenario 2:

Attempting the workaround for the (continued) inability to include "@" in vSphere volume names due to the above regex parsing & schema issue #1315.

Configuration

docker-compose.yml:

version: '2'
services:

  database:
    extends:
      file: common.yml
      service: database

volumes:
  svcdb:
    external:
      name: svcdb_db@ix2b-VMware

common.yml:

version: '2'
services:

  database:
    ....STUFF....
    restart: unless-stopped  
    volumes:
      - svcdb_db:/db_backups
      

Result

SUCCESS!

To demonstrate how problematic this parsing seems to be, and why this literally burned an entire day of my life testing the various combinations: if we have well proven from all the posts above that an INVALID (or so the parser says) value in the key portion of the "volumes:" declaration section will throw a terminating error, then logic would dictate that a VALID declaration and/or values are also being parsed and used further on. Scenario 3 shows that, for some odd reason, this is not the case.

Scenario 3:

Demonstrate that an INVALID value in the VOLUMES section of an EXTEND yml, despite being parsed for errors, does NOT appear to contribute any correctly parsed volume data.

Configuration

docker-compose.yml:

version: '2'
services:

  database:
    extends:
      file: common.yml
      service: database
      
volumes:
  svcdb:
    external:
      name: svcdb_db@ix2b-VMware

common.yml:

version: '2'
services:

  database:
    ....STUFF....
    restart: unless-stopped  
    volumes:
      - svcdb_db:/db_backups
      
volumes:
  svcdb_db:
    external: FALSE
    driver: WRONGDRIVER

Result

SUCCESS! What?!? Of note, this shows that after the parser does a questionable job at validation, some back-end routines get very selective and literally IGNORE certain values (note the above declares external as "FALSE" and uses a totally INVALID, non-existent driver name) - as long as the declaration has been found in the root docker-compose.yml. However, all historical behavior matches the documentation that these values should be MERGED, and clearly, in this case of volume declaration, they are not.

Well, then there is no way all the usage could be in the extend file while the declaration lives ONLY in the root docker-compose.yml, where it is not even actually referenced once, right? Wrong:

Scenario 4:

Test whether the volume declaration section for vSphere volumes can be missing from the very extend file in which the volumes are referenced, and placed in the root file where it is not actively referenced, unlike any of the other declaration sections.

Configuration

docker-compose.yml:

version: '2'
services:

  database:
    extends:
      file: common.yml
      service: database
      
volumes:
  svcdb:
    external:
      name: svcdb_db@ix2b-VMware

common.yml:

version: '2'
services:

  database:
    ....STUFF....
    restart: unless-stopped  
    volumes:
      - svcdb_db:/db_backups

Result

SUCCESS! What?!? Okay, another unexpected result. Parsing the extend file that continually threw errors about the "required" volume declaration is now suddenly fine with it being totally GONE. For volume declarations, it appears that as long as one is present in the root docker-compose.yml, it can be completely absent from the extend yml in which all the declared volumes are used.

Note the upshot: if you simply MOVE the WORKING declaration section from docker-compose.yml into common.yml - IT RESULTS IN ERROR. Move it back - things work.

Current workaround for using vSphere volume names containing the "@datastore", from my testing (see the sketch after this list):

  1. In the root docker-compose.yml (note this if using EXTENDS), at the bottom, add your volume declaration section, but TRUNCATE everything after the "@" both in the volume USAGE (in ALL yml files) and in the declaration key.
  2. In the root docker-compose.yml, under each declared volume, then add two lines, one being "external:", followed by the next being "name:", each indented +2. The "name:" line is where you insert your full vSphere volume name in quotes (i.e. name: "database_volume@vspheredatastore").
  3. If you use EXTEND, when it comes to your vSphere volumes, do not bother declaring them at ALL in the extend yml; the most you get is parser errors, NONE of the data is even (currently) used, and for some reason NO ERROR is thrown if the declaration is absent, as long as you have completed steps 1 & 2.
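A minimal sketch consolidating those three steps, using the volume and datastore names from the scenarios above (the declaration key here matches the usage key, per step 1):

docker-compose.yml:

version: '2'
services:

  database:
    extends:
      file: common.yml
      service: database

volumes:
  svcdb_db:
    external:
      name: "svcdb_db@ix2b-VMware"

common.yml:

version: '2'
services:

  database:
    ....STUFF....
    volumes:
      - svcdb_db:/db_backups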

@lipingxue
Contributor

@CollinChaffin Thanks for your feedback and detailed testing. According to the release notes, the fix docker/cli#306 has been included in build 17.09.0-ce. I will try to verify with this new ce build.
Another issue I noticed in your example is that the "version" in your compose file is "2"; could you try "3.4", the compose file version in which the fix is supposed to work?

@lipingxue
Contributor

lipingxue commented Sep 25, 2017

I have verified that with docker 17.09.0-ce-rc3 the fix from docker/cli#306 is included in the build, and it works as expected.
Docker version used in the test:

root@sc-rdops-vm02-dhcp-52-237:~# docker version
Client:
 Version:      17.09.0-ce-rc3
 API version:  1.32
 Go version:   go1.8.3
 Git commit:   2357fb2
 Built:        Thu Sep 21 02:33:35 2017
 OS/Arch:      linux/amd64

Server:
 Version:      17.09.0-ce-rc3
 API version:  1.32 (minimum version 1.12)
 Go version:   go1.8.3
 Git commit:   2357fb2
 Built:        Thu Sep 21 02:32:16 2017
 OS/Arch:      linux/amd64
 Experimental: false


Please see the following example:

root@sc-rdops-vm02-dhcp-52-237:~# cat postgres.yml 
version: "3.4"
 
services:
 
  postgres:
    image: postgres
    ports:
      - "5432:5432"
    volumes:
      - postgres_vol:/var/lib/data
    environment:
      - "POSTGRES_PASSWORD=secretpass"
      - "PGDATA=/var/lib/data/db"
    deploy:
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.role == worker
volumes:
   postgres_vol:
      name: "postgres_vol@sharedVmfs-1"
      driver: "vsphere"
      driver_opts:
        size: "1GB"

Deploy a stack using the above yaml file:

root@sc-rdops-vm02-dhcp-52-237:~# docker stack deploy -c postgres.yml postgres
Creating network postgres_default
Creating service postgres_postgres

After the stack deploy, volume "postgres_vol@sharedVmfs-1" has been created by the vSphere driver as expected.

root@sc-rdops-vm02-dhcp-52-237:~# docker volume ls
DRIVER              VOLUME NAME
vsphere:latest      postgres_vol@sharedVmfs-1

root@sc-rdops-vm02-dhcp-52-237:~# docker volume inspect postgres_vol@sharedVmfs-1
[
    {
        "CreatedAt": "0001-01-01T00:00:00Z",
        "Driver": "vsphere:latest",
        "Labels": null,
        "Mountpoint": "/mnt/vmdk/postgres_vol@sharedVmfs-1/",
        "Name": "postgres_vol@sharedVmfs-1",
        "Options": {},
        "Scope": "global",
        "Status": {
            "access": "read-write",
            "attach-as": "independent_persistent",
            "attached to VM": "worker2-VM2.0",
            "attachedVMDevice": {
                "ControllerPciSlotNumber": "160",
                "Unit": "0"
            },
            "capacity": {
                "allocated": "79MB",
                "size": "1GB"
            },
            "clone-from": "None",
            "created": "Mon Sep 25 21:35:24 2017",
            "created by VM": "worker2-VM2.0",
            "datastore": "sharedVmfs-1",
            "diskformat": "thin",
            "fstype": "ext4",
            "status": "attached"
        }
    }
]

Wait for a while, then check the service:

root@sc-rdops-vm02-dhcp-52-237:~# docker service ps postgres_postgres 
ID                  NAME                  IMAGE               NODE                        DESIRED STATE       CURRENT STATE            ERROR               PORTS
vcqjxzyz9ff5        postgres_postgres.1   postgres:latest     sc-rdops-vm02-dhcp-52-237   Running             Running 24 seconds ago                       
root@sc-rdops-vm02-dhcp-52-237:~# 
root@sc-rdops-vm02-dhcp-52-237:~# docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE               PORTS
2y5d7jfsd6cu        postgres_postgres   replicated          1/1                 postgres:latest     *:5432->5432/tcp

@lipingxue
Contributor

@CollinChaffin, we don't need the workaround now, as I have verified the fix above. Could you try it on your side and let me know if you have any questions? Thanks!

@msterin
Contributor

msterin commented Sep 25, 2017

@lipingxue - thanks for validating. It looks like the main damage came from closing the issue too early and referencing the wrong version (above: "This has been fixed through docker/cli#306, and will be available on docker 17.08.0."). As I mentioned, going forward let's close issues only after validating with the proper version.

@CollinChaffin - sorry about the headache; please let us know if 17.09 (and yaml 3.4) finally works as expected on your setup.

@lipingxue
Contributor

@nbaillie @CollinChaffin Could you verify that this issue has been fixed on your side with Docker 17.09 and yaml 3.4? Thanks!

@shuklanirdesh82
Contributor

@nbaillie @CollinChaffin Please reopen if you are still facing any issue.

@lipingxue
Contributor

lipingxue commented Oct 31, 2017 via email

@AndrewSav

@lipingxue, thank you for your response. You cannot see my update not because the issue is closed, but because I deleted it. I deleted it because I realized I was doing something silly.

@lipingxue
Contributor

lipingxue commented Oct 31, 2017 via email
