
Failed to start Kubernetes runtime of workspace. Cause: Server 'theia' not available #13844

Closed
hjbbjh opened this issue Jul 15, 2019 · 43 comments
Labels
area/editor/theia Issues related to the che-theia IDE of Che kind/bug Outline of a bug - must adhere to the bug report template. severity/P1 Has a major impact to usage or development of the system. status/analyzing An issue has been proposed and it is currently being analyzed for effort and implementation approach

Comments

@hjbbjh

hjbbjh commented Jul 15, 2019

  • che log:
2019-07-15 04:51:43,843[nio-8080-exec-5]  [DEBUG] [.a.w.s.s.RuntimeInfrastructure 54]   - Start provisioning installer configs for workspace 'workspaces112lfo56ngilrgn'
2019-07-15 04:51:43,859[nio-8080-exec-5]  [DEBUG] [e.EnvVarEnvironmentProvisioner 44]   - Provisioning environment variables for workspace 'workspaces112lfo56ngilrgn'
2019-07-15 04:51:46,080[nio-8080-exec-5]  [DEBUG] [e.EnvVarEnvironmentProvisioner 54]   - Environment variables provisioning done for workspace 'workspaces112lfo56ngilrgn'
2019-07-15 04:51:46,084[nio-8080-exec-5]  [DEBUG] [cyEnvVarEnvironmentProvisioner 48]   - Legacy environment variables not provisioned to workspace 'workspaces112lfo56ngilrgn'.
2019-07-15 04:51:46,093[nio-8080-exec-5]  [DEBUG] [ctsVolumeForWsAgentProvisioner 50]   - Provisioning project volumes for workspace 'workspaces112lfo56ngilrgn'
2019-07-15 04:51:46,108[nio-8080-exec-5]  [DEBUG] [w.s.s.p.MachineNameProvisioner 36]   - Provisioning machine names for workspace 'workspaces112lfo56ngilrgn'
2019-07-15 04:51:48,109[nio-8080-exec-5]  [INFO ] [o.e.c.a.w.s.WorkspaceRuntimes 433]   - Starting workspace 'test/wksp-s832' with id 'workspaces112lfo56ngilrgn' by user 'admin'
2019-07-15 04:51:48,522[aceSharedPool-0]  [DEBUG] [.k.w.SidecarToolingProvisioner 69]   - Started sidecar tooling provisioning workspace 'workspaces112lfo56ngilrgn'
2019-07-15 04:51:49,450[aceSharedPool-0]  [DEBUG] [etesEnvironmentProvisionerImpl 118]  - Start provisioning Kubernetes environment for workspace 'workspaces112lfo56ngilrgn'
2019-07-15 04:51:49,451[aceSharedPool-0]  [DEBUG] [etesEnvironmentProvisionerImpl 120]  - Provisioning installer server ports for workspace 'workspaces112lfo56ngilrgn'
2019-07-15 04:51:49,502[aceSharedPool-0]  [DEBUG] [etesEnvironmentProvisionerImpl 123]  - Provisioning logs volume for workspace 'workspaces112lfo56ngilrgn'
2019-07-15 04:51:49,518[aceSharedPool-0]  [DEBUG] [etesEnvironmentProvisionerImpl 128]  - Provisioning servers & env vars converters for workspace 'workspaces112lfo56ngilrgn'
2019-07-15 04:51:49,576[aceSharedPool-0]  [DEBUG] [.c.w.i.k.n.p.CommonPVCStrategy 149]  - Provisioning PVC strategy for workspace 'workspaces112lfo56ngilrgn'
2019-07-15 04:51:49,924[aceSharedPool-0]  [DEBUG] [.c.w.i.k.n.p.CommonPVCStrategy 170]  - PVC strategy provisioning done for workspace 'workspaces112lfo56ngilrgn'
2019-07-15 04:51:49,925[aceSharedPool-0]  [DEBUG] [etesEnvironmentProvisionerImpl 136]  - Provisioning environment items for workspace 'workspaces112lfo56ngilrgn'
2019-07-15 04:51:50,069[aceSharedPool-0]  [DEBUG] [etesEnvironmentProvisionerImpl 147]  - Provisioning Kubernetes environment done for workspace 'workspaces112lfo56ngilrgn'
2019-07-15 04:51:50,077[aceSharedPool-0]  [DEBUG] [.c.w.i.k.w.PluginBrokerManager 119]  - Entering plugin brokers deployment chain workspace 'workspaces112lfo56ngilrgn'
2019-07-15 04:51:50,084[aceSharedPool-0]  [DEBUG] [c.w.i.k.w.b.ListenBrokerEvents 60]   - Subscribing broker events listener for workspace 'workspaces112lfo56ngilrgn'
2019-07-15 04:51:50,132[aceSharedPool-0]  [DEBUG] [.c.w.i.k.n.p.CommonPVCStrategy 183]  - Preparing PVC started for workspace 'workspaces112lfo56ngilrgn'
2019-07-15 04:51:50,662[aceSharedPool-0]  [DEBUG] [e.c.w.i.k.n.p.PVCSubPathHelper 110]  - Preparing PVC `claim-che-workspace` for workspace `workspaces112lfo56ngilrgn`. Directories to create: [workspaces112lfo56ngilrgn/che-logs-che-plugin-broker/, workspaces112lfo56ngilrgn/plugins/]
2019-07-15 04:51:54,871[aceSharedPool-0]  [DEBUG] [.c.w.i.k.n.p.CommonPVCStrategy 218]  - Preparing PVC done for workspace 'workspaces112lfo56ngilrgn'
2019-07-15 04:51:54,873[aceSharedPool-0]  [DEBUG] [o.e.c.w.i.k.w.b.DeployBroker 77]     - Starting brokers pod for workspace 'workspaces112lfo56ngilrgn'
2019-07-15 04:51:55,115[aceSharedPool-0]  [DEBUG] [o.e.c.w.i.k.w.b.DeployBroker 105]    - Brokers pod is created for workspace 'workspaces112lfo56ngilrgn'
2019-07-15 04:51:55,117[aceSharedPool-0]  [DEBUG] [e.c.w.i.k.w.b.WaitBrokerResult 63]   - Trying to get brokers result for workspace 'workspaces112lfo56ngilrgn'
2019-07-15 04:52:32,325[aceSharedPool-0]  [DEBUG] [.k.w.SidecarToolingProvisioner 85]   - Finished sidecar tooling provisioning workspace 'workspaces112lfo56ngilrgn'
2019-07-15 04:52:32,327[aceSharedPool-0]  [DEBUG] [.a.w.s.s.RuntimeInfrastructure 54]   - Start provisioning installer configs for workspace 'workspaces112lfo56ngilrgn'
2019-07-15 04:52:32,327[aceSharedPool-0]  [DEBUG] [.a.w.s.s.RuntimeInfrastructure 54]   - Start provisioning installer configs for workspace 'workspaces112lfo56ngilrgn'
2019-07-15 04:52:32,328[aceSharedPool-0]  [DEBUG] [.a.w.s.s.RuntimeInfrastructure 54]   - Start provisioning installer configs for workspace 'workspaces112lfo56ngilrgn'
2019-07-15 04:52:32,329[aceSharedPool-0]  [DEBUG] [.a.w.s.s.RuntimeInfrastructure 54]   - Start provisioning installer configs for workspace 'workspaces112lfo56ngilrgn'
2019-07-15 04:52:32,330[aceSharedPool-0]  [DEBUG] [e.EnvVarEnvironmentProvisioner 44]   - Provisioning environment variables for workspace 'workspaces112lfo56ngilrgn'
2019-07-15 04:52:32,391[aceSharedPool-0]  [DEBUG] [e.EnvVarEnvironmentProvisioner 54]   - Environment variables provisioning done for workspace 'workspaces112lfo56ngilrgn'
2019-07-15 04:52:32,392[aceSharedPool-0]  [DEBUG] [cyEnvVarEnvironmentProvisioner 48]   - Legacy environment variables not provisioned to workspace 'workspaces112lfo56ngilrgn'.
2019-07-15 04:52:32,393[aceSharedPool-0]  [DEBUG] [ctsVolumeForWsAgentProvisioner 50]   - Provisioning project volumes for workspace 'workspaces112lfo56ngilrgn'
2019-07-15 04:52:32,402[aceSharedPool-0]  [DEBUG] [w.s.s.p.MachineNameProvisioner 36]   - Provisioning machine names for workspace 'workspaces112lfo56ngilrgn'
2019-07-15 04:52:32,665[aceSharedPool-0]  [DEBUG] [etesEnvironmentProvisionerImpl 118]  - Start provisioning Kubernetes environment for workspace 'workspaces112lfo56ngilrgn'
2019-07-15 04:52:32,666[aceSharedPool-0]  [DEBUG] [etesEnvironmentProvisionerImpl 120]  - Provisioning installer server ports for workspace 'workspaces112lfo56ngilrgn'
2019-07-15 04:52:32,668[aceSharedPool-0]  [DEBUG] [etesEnvironmentProvisionerImpl 123]  - Provisioning logs volume for workspace 'workspaces112lfo56ngilrgn'
2019-07-15 04:52:32,669[aceSharedPool-0]  [DEBUG] [etesEnvironmentProvisionerImpl 128]  - Provisioning servers & env vars converters for workspace 'workspaces112lfo56ngilrgn'
2019-07-15 04:52:33,709[aceSharedPool-0]  [DEBUG] [.c.w.i.k.n.p.CommonPVCStrategy 149]  - Provisioning PVC strategy for workspace 'workspaces112lfo56ngilrgn'
2019-07-15 04:52:33,789[aceSharedPool-0]  [DEBUG] [.c.w.i.k.n.p.CommonPVCStrategy 170]  - PVC strategy provisioning done for workspace 'workspaces112lfo56ngilrgn'
2019-07-15 04:52:33,791[aceSharedPool-0]  [DEBUG] [etesEnvironmentProvisionerImpl 136]  - Provisioning environment items for workspace 'workspaces112lfo56ngilrgn'
2019-07-15 04:52:33,797[aceSharedPool-0]  [DEBUG] [etesEnvironmentProvisionerImpl 147]  - Provisioning Kubernetes environment done for workspace 'workspaces112lfo56ngilrgn'
2019-07-15 04:52:33,798[aceSharedPool-0]  [DEBUG] [.i.k.KubernetesInternalRuntime 198]  - Provisioning of workspace 'workspaces112lfo56ngilrgn' completed.
2019-07-15 04:52:33,800[aceSharedPool-0]  [DEBUG] [.c.w.i.k.n.p.CommonPVCStrategy 183]  - Preparing PVC started for workspace 'workspaces112lfo56ngilrgn'
2019-07-15 04:52:33,982[aceSharedPool-0]  [DEBUG] [e.c.w.i.k.n.p.PVCSubPathHelper 110]  - Preparing PVC `claim-che-workspace` for workspace `workspaces112lfo56ngilrgn`. Directories to create: [workspaces112lfo56ngilrgn/projects/, workspaces112lfo56ngilrgn/m2/, workspaces112lfo56ngilrgn/che-logs-maven/, workspaces112lfo56ngilrgn/plugins/]
2019-07-15 04:52:36,690[aceSharedPool-0]  [DEBUG] [.c.w.i.k.n.p.CommonPVCStrategy 218]  - Preparing PVC done for workspace 'workspaces112lfo56ngilrgn'
2019-07-15 04:52:37,756[aceSharedPool-0]  [WARN ] [i.f.k.c.i.VersionUsageUtils 55]      - The client is using resource type 'ingresses' with unstable version 'v1beta1'
2019-07-15 04:52:38,034[aceSharedPool-0]  [DEBUG] [.i.k.KubernetesInternalRuntime 823]  - Ingresses created for workspace 'workspaces112lfo56ngilrgn'. Wait them to be ready.
2019-07-15 04:52:50,159[aceSharedPool-0]  [DEBUG] [.i.k.KubernetesInternalRuntime 841]  - Ingresses creation for workspace 'workspaces112lfo56ngilrgn' done.
2019-07-15 04:52:50,697[aceSharedPool-0]  [DEBUG] [.i.k.KubernetesInternalRuntime 723]  - Begin pods creation for workspace 'workspaces112lfo56ngilrgn'
2019-07-15 04:52:51,050[aceSharedPool-0]  [DEBUG] [.i.k.KubernetesInternalRuntime 727]  - Creating pod 'workspaces112lfo56ngilrgn.che-jwtproxy' in workspace 'workspaces112lfo56ngilrgn'
2019-07-15 04:52:51,052[aceSharedPool-0]  [DEBUG] [.i.k.KubernetesInternalRuntime 754]  - Creating machine 'che-jwtproxy' in workspace 'workspaces112lfo56ngilrgn'
2019-07-15 04:52:51,999[aceSharedPool-0]  [DEBUG] [.i.k.KubernetesInternalRuntime 734]  - Creating deployment 'workspaces112lfo56ngilrgn.maven' in workspace 'workspaces112lfo56ngilrgn'
2019-07-15 04:52:52,000[aceSharedPool-0]  [DEBUG] [.i.k.KubernetesInternalRuntime 754]  - Creating machine 'maven' in workspace 'workspaces112lfo56ngilrgn'
2019-07-15 04:52:52,783[aceSharedPool-0]  [DEBUG] [.i.k.KubernetesInternalRuntime 754]  - Creating machine 'che-machine-exechv2' in workspace 'workspaces112lfo56ngilrgn'
2019-07-15 04:52:53,044[aceSharedPool-0]  [DEBUG] [.i.k.KubernetesInternalRuntime 754]  - Creating machine 'theia-ide828' in workspace 'workspaces112lfo56ngilrgn'
2019-07-15 04:52:53,903[aceSharedPool-0]  [DEBUG] [.i.k.KubernetesInternalRuntime 754]  - Creating machine 'vscode-java5b1' in workspace 'workspaces112lfo56ngilrgn'
2019-07-15 04:52:53,979[aceSharedPool-0]  [DEBUG] [.i.k.KubernetesInternalRuntime 740]  - Pods creation finished in workspace 'workspaces112lfo56ngilrgn'
2019-07-15 04:53:02,326[aceSharedPool-0]  [DEBUG] [.i.k.KubernetesInternalRuntime 333]  - Waiting to start machines of workspace 'workspaces112lfo56ngilrgn'
2019-07-15 04:53:06,309[ineSharedPool-0]  [DEBUG] [.i.k.KubernetesInternalRuntime 458]  - Bootstrapping machine 'che-jwtproxy' in workspace 'workspaces112lfo56ngilrgn'
2019-07-15 04:53:06,390[ineSharedPool-1]  [DEBUG] [.i.k.KubernetesInternalRuntime 399]  - Performing servers check for machine 'che-jwtproxy' in workspace 'workspaces112lfo56ngilrgn'
2019-07-15 04:53:06,523[ineSharedPool-1]  [DEBUG] [.i.k.KubernetesInternalRuntime 408]  - Servers checks done for machine 'che-jwtproxy' in workspace 'workspaces112lfo56ngilrgn'
2019-07-15 04:53:13,786[ineSharedPool-1]  [DEBUG] [.i.k.KubernetesInternalRuntime 458]  - Bootstrapping machine 'vscode-java5b1' in workspace 'workspaces112lfo56ngilrgn'
2019-07-15 04:53:13,811[ineSharedPool-4]  [DEBUG] [.i.k.KubernetesInternalRuntime 399]  - Performing servers check for machine 'vscode-java5b1' in workspace 'workspaces112lfo56ngilrgn'
2019-07-15 04:53:13,827[ineSharedPool-4]  [DEBUG] [.i.k.KubernetesInternalRuntime 408]  - Servers checks done for machine 'vscode-java5b1' in workspace 'workspaces112lfo56ngilrgn'
2019-07-15 04:53:14,134[ineSharedPool-0]  [DEBUG] [.i.k.KubernetesInternalRuntime 458]  - Bootstrapping machine 'maven' in workspace 'workspaces112lfo56ngilrgn'
2019-07-15 04:53:14,156[ineSharedPool-4]  [DEBUG] [.i.k.KubernetesInternalRuntime 399]  - Performing servers check for machine 'maven' in workspace 'workspaces112lfo56ngilrgn'
2019-07-15 04:53:14,166[ineSharedPool-4]  [DEBUG] [.i.k.KubernetesInternalRuntime 408]  - Servers checks done for machine 'maven' in workspace 'workspaces112lfo56ngilrgn'
2019-07-15 04:53:16,352[ineSharedPool-3]  [DEBUG] [.i.k.KubernetesInternalRuntime 458]  - Bootstrapping machine 'che-machine-exechv2' in workspace 'workspaces112lfo56ngilrgn'
2019-07-15 04:53:16,363[ineSharedPool-4]  [DEBUG] [.i.k.KubernetesInternalRuntime 399]  - Performing servers check for machine 'che-machine-exechv2' in workspace 'workspaces112lfo56ngilrgn'
2019-07-15 04:53:16,368[ineSharedPool-4]  [DEBUG] [.i.k.KubernetesInternalRuntime 408]  - Servers checks done for machine 'che-machine-exechv2' in workspace 'workspaces112lfo56ngilrgn'
2019-07-15 04:53:16,536[ineSharedPool-2]  [DEBUG] [.i.k.KubernetesInternalRuntime 458]  - Bootstrapping machine 'theia-ide828' in workspace 'workspaces112lfo56ngilrgn'
2019-07-15 04:53:16,562[ineSharedPool-4]  [DEBUG] [.i.k.KubernetesInternalRuntime 399]  - Performing servers check for machine 'theia-ide828' in workspace 'workspaces112lfo56ngilrgn'
2019-07-15 04:56:17,280[aceSharedPool-0]  [WARN ] [.i.k.KubernetesInternalRuntime 245]  - Failed to start Kubernetes runtime of workspace workspaces112lfo56ngilrgn. Cause: Server 'theia' in container 'theia-ide828' not available.
2019-07-15 04:56:17,291[ServersChecker]   [DEBUG] [.i.k.KubernetesInternalRuntime 408]  - Servers checks done for machine 'theia-ide828' in workspace 'workspaces112lfo56ngilrgn'
2019-07-15 04:56:21,831[aceSharedPool-0]  [INFO ] [o.e.c.a.w.s.WorkspaceRuntimes 856]   - Workspace 'test:wksp-s832' with id 'workspaces112lfo56ngilrgn' start failed
  • theia log:
root INFO Theia app listening on http://0.0.0.0:3100.
root INFO unzipping the plugin ProxyPluginDeployerEntry {
  deployer:
   PluginTheiaFileHandler { unpackedFolder: '/tmp/theia-unpacked' },
  delegate:
   PluginDeployerEntryImpl {
     originId: 'local-dir:///default-theia-plugins',
     pluginId: 'eclipse_che_ports_plugin.theia',
     map: Map {},
     changes: [],
     acceptedTypes: [],
     currentPath: '/default-theia-plugins/eclipse_che_ports_plugin.theia',
     initPath: '/default-theia-plugins/eclipse_che_ports_plugin.theia',
     resolved: true,
     resolvedByName: 'LocalDirectoryPluginDeployerResolver' },
  deployerName: 'PluginTheiaFileHandler' }
root INFO unzipping the plugin ProxyPluginDeployerEntry {
  deployer:
   PluginTheiaFileHandler { unpackedFolder: '/tmp/theia-unpacked' },
  delegate:
   PluginDeployerEntryImpl {
     originId: 'local-dir:///default-theia-plugins',
     pluginId: 'eclipse_che_theia_containers_plugin.theia',
     map: Map {},
     changes: [],
     acceptedTypes: [],
     currentPath:
      '/default-theia-plugins/eclipse_che_theia_containers_plugin.theia',
     initPath:
      '/default-theia-plugins/eclipse_che_theia_containers_plugin.theia',
     resolved: true,
     resolvedByName: 'LocalDirectoryPluginDeployerResolver' },
  deployerName: 'PluginTheiaFileHandler' }
root INFO unzipping the plugin ProxyPluginDeployerEntry {
  deployer:
   PluginTheiaFileHandler { unpackedFolder: '/tmp/theia-unpacked' },
  delegate:
   PluginDeployerEntryImpl {
     originId: 'local-dir:///default-theia-plugins',
     pluginId: 'eclipse_che_theia_factory_plugin.theia',
     map: Map {},
     changes: [],
     acceptedTypes: [],
     currentPath:
      '/default-theia-plugins/eclipse_che_theia_factory_plugin.theia',
     initPath:
      '/default-theia-plugins/eclipse_che_theia_factory_plugin.theia',
     resolved: true,
     resolvedByName: 'LocalDirectoryPluginDeployerResolver' },
  deployerName: 'PluginTheiaFileHandler' }
root INFO unzipping the plugin ProxyPluginDeployerEntry {
  deployer:
   PluginTheiaFileHandler { unpackedFolder: '/tmp/theia-unpacked' },
  delegate:
   PluginDeployerEntryImpl {
     originId: 'local-dir:///default-theia-plugins',
     pluginId: 'eclipse_che_theia_ssh_plugin.theia',
     map: Map {},
     changes: [],
     acceptedTypes: [],
     currentPath: '/default-theia-plugins/eclipse_che_theia_ssh_plugin.theia',
     initPath: '/default-theia-plugins/eclipse_che_theia_ssh_plugin.theia',
     resolved: true,
     resolvedByName: 'LocalDirectoryPluginDeployerResolver' },
  deployerName: 'PluginTheiaFileHandler' }
root INFO unzipping the plugin ProxyPluginDeployerEntry {
  deployer:
   PluginTheiaFileHandler { unpackedFolder: '/tmp/theia-unpacked' },
  delegate:
   PluginDeployerEntryImpl {
     originId: 'local-dir:///default-theia-plugins',
     pluginId: 'eclipse_che_welcome_plugin.theia',
     map: Map {},
     changes: [],
     acceptedTypes: [],
     currentPath: '/default-theia-plugins/eclipse_che_welcome_plugin.theia',
     initPath: '/default-theia-plugins/eclipse_che_welcome_plugin.theia',
     resolved: true,
     resolvedByName: 'LocalDirectoryPluginDeployerResolver' },
  deployerName: 'PluginTheiaFileHandler' }
root INFO unzipping the plugin ProxyPluginDeployerEntry {
  deployer:
   PluginTheiaFileHandler { unpackedFolder: '/tmp/theia-unpacked' },
  delegate:
   PluginDeployerEntryImpl {
     originId: 'local-dir:///default-theia-plugins',
     pluginId: 'task_plugin.theia',
     map: Map {},
     changes: [],
     acceptedTypes: [],
     currentPath: '/default-theia-plugins/task_plugin.theia',
     initPath: '/default-theia-plugins/task_plugin.theia',
     resolved: true,
     resolvedByName: 'LocalDirectoryPluginDeployerResolver' },
  deployerName: 'PluginTheiaFileHandler' }
root INFO unzipping the plugin ProxyPluginDeployerEntry {
  deployer:
   PluginTheiaFileHandler { unpackedFolder: '/tmp/theia-unpacked' },
  delegate:
   PluginDeployerEntryImpl {
     originId: 'local-dir:///default-theia-plugins',
     pluginId: 'theia_yeoman_plugin.theia',
     map: Map {},
     changes: [],
     acceptedTypes: [],
     currentPath: '/default-theia-plugins/theia_yeoman_plugin.theia',
     initPath: '/default-theia-plugins/theia_yeoman_plugin.theia',
     resolved: true,
     resolvedByName: 'LocalDirectoryPluginDeployerResolver' },
  deployerName: 'PluginTheiaFileHandler' }
root INFO unzipping the plugin ProxyPluginDeployerEntry {
  deployer:
   PluginVsCodeFileHandler { unpackedFolder: '/tmp/vscode-unpacked' },
  delegate:
   PluginDeployerEntryImpl {
     originId: 'local-dir:///default-theia-plugins',
     pluginId: 'vscode-git-1.3.0.1.vsix',
     map: Map {},
     changes: [],
     acceptedTypes: [],
     currentPath: '/default-theia-plugins/vscode-git-1.3.0.1.vsix',
     initPath: '/default-theia-plugins/vscode-git-1.3.0.1.vsix',
     resolved: true,
     resolvedByName: 'LocalDirectoryPluginDeployerResolver' },
  deployerName: 'PluginVsCodeFileHandler' }
root INFO PluginTheiaDirectoryHandler: accepting plugin with path /tmp/theia-unpacked/eclipse_che_ports_plugin.theia
root INFO PluginTheiaDirectoryHandler: accepting plugin with path /tmp/theia-unpacked/eclipse_che_theia_containers_plugin.theia
root INFO PluginTheiaDirectoryHandler: accepting plugin with path /tmp/theia-unpacked/eclipse_che_theia_factory_plugin.theia
root INFO PluginTheiaDirectoryHandler: accepting plugin with path /tmp/theia-unpacked/eclipse_che_theia_ssh_plugin.theia
root INFO PluginTheiaDirectoryHandler: accepting plugin with path /tmp/theia-unpacked/eclipse_che_welcome_plugin.theia
root INFO PluginTheiaDirectoryHandler: accepting plugin with path /tmp/theia-unpacked/task_plugin.theia
root INFO PluginTheiaDirectoryHandler: accepting plugin with path /tmp/theia-unpacked/theia_yeoman_plugin.theia
root INFO PluginTheiaDirectoryHandler: accepting plugin with path /tmp/vscode-unpacked/vscode-git-1.3.0.1.vsix
root INFO Resolved "vscode-git-1.3.0.1.vsix" to a VS Code extension "git@1.0.0" with engines: { vscode: '^1.5.0' }
root INFO PluginTheiaDirectoryHandler: accepting plugin with path /plugins/sidecars
root INFO Deploying backend plugin "@eclipse-che/ports-plugin@0.0.1" from "/tmp/theia-unpacked/eclipse_che_ports_plugin.theia/lib/ports-plugin.js"
root INFO Deploying backend plugin "@eclipse-che/theia-containers-plugin@0.0.2" from "/tmp/theia-unpacked/eclipse_che_theia_containers_plugin.theia/lib/containers-plugin.js"
root INFO Deploying backend plugin "@eclipse-che/theia-factory-plugin@0.0.1" from "/tmp/theia-unpacked/eclipse_che_theia_factory_plugin.theia/lib/factory-plugin.js"
root INFO Deploying backend plugin "@eclipse-che/theia-ssh-plugin@0.0.1" from "/tmp/theia-unpacked/eclipse_che_theia_ssh_plugin.theia/lib/ssh-plugin-backend.js"
root INFO Deploying backend plugin "@eclipse-che/welcome-plugin@0.0.1" from "/tmp/theia-unpacked/eclipse_che_welcome_plugin.theia/lib/welcome-plugin.js"
root INFO Deploying backend plugin "task-plugin@0.0.1" from "/tmp/theia-unpacked/task_plugin.theia/lib/task-plugin-backend.js"
root INFO Deploying backend plugin "@theia/yeoman-plugin@0.0.1-1539189859" from "/tmp/theia-unpacked/theia_yeoman_plugin.theia/lib/theia-yeoman-plugin-backend-plugin.js"
root INFO Deploying backend plugin "git@1.0.0" from "/tmp/vscode-unpacked/vscode-git-1.3.0.1.vsix/extension/out/main"

Received SIGTERM

@hjbbjh
Author

hjbbjh commented Jul 15, 2019

After successfully deploying multiuser Che, attempting to create and run a new workspace with the "java maven" stack fails.

@hjbbjh
Author

hjbbjh commented Jul 15, 2019

che 7.0.0

@skabashnyuk
Contributor

Hello @hjbbjh
How did you install Che? What is your k8s environment?

@hjbbjh
Author

hjbbjh commented Jul 15, 2019

installation via: helm upgrade --install che --namespace eclipse -f ./values/default-host.yaml --set global.ingressDomain=che-eclipse.my-nginx.default.svc.cluster.local ./
k8s version: v1.15.0
OS version: 7.6.1810

@skabashnyuk
Contributor

Can you be more specific about the Che version? We have 7.0.0-rc-4.0-SNAPSHOT in master at this moment.

@hjbbjh
Author

hjbbjh commented Jul 15, 2019

che-7.0.0-RC-1.0.tar.gz

@hjbbjh
Author

hjbbjh commented Jul 15, 2019

Is it because there's a timeout? The image eclipse/che-theia:7.0.0-rc-3.0 fails to come up.

It feels like theia hasn't fully started yet.
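
If a timeout is the suspect, one hedged knob to try is the workspace start timeout on the che server deployment. A sketch only: the deployment name che and namespace eclipse are taken from the helm command above, and the variable name from a config posted later in this thread; adjust both to your deployment.

kubectl set env deployment/che -n eclipse CHE_INFRA_KUBERNETES_WORKSPACE__START__TIMEOUT__MIN=15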

@stefanhenseler

stefanhenseler commented Jul 15, 2019

I'm experiencing the exact same issue. I'm using eclipse/che-theia:7.0.0-rc-3.0 with the following config:

CHE_HOST=che.cnative.io
CHE_PORT=5080
CHE_CORS_ENABLED=true
CHE_DEBUG_SERVER=true
CHE_INFRASTRUCTURE_ACTIVE=kubernetes
CHE_INFRA_KUBERNETES_INGRESS_ANNOTATIONS__JSON={"kubernetes.io/ingress.class": "nginx", "nginx.ingress.kubernetes.io/rewrite-target": "/","nginx.ingress.kubernetes.io/ssl-redirect": "false","nginx.ingress.kubernetes.io/proxy-connect-timeout": "3600","nginx.ingress.kubernetes.io/proxy-read-timeout": "3600"}
CHE_INFRA_KUBERNETES_INGRESS_DOMAIN=cnative.io
CHE_INFRA_KUBERNETES_MACHINE__START__TIMEOUT__MIN=20
CHE_INFRA_KUBERNETES_MASTER__URL=https://kubernetes.default
CHE_INFRA_KUBERNETES_NAMESPACE=dev-tools
CHE_INFRA_KUBERNETES_POD_SECURITY__CONTEXT_FS__GROUP=1724
CHE_INFRA_KUBERNETES_POD_SECURITY__CONTEXT_RUN__AS__USER=1724
CHE_INFRA_KUBERNETES_PVC_PRECREATE__SUBPATHS=true
CHE_INFRA_KUBERNETES_PVC_QUANTITY=1Gi
CHE_INFRA_KUBERNETES_PVC_STRATEGY=common
CHE_INFRA_KUBERNETES_SERVER__STRATEGY=multi-host
CHE_INFRA_KUBERNETES_SERVICE__ACCOUNT__NAME=che-workspace
CHE_INFRA_KUBERNETES_TLS__ENABLED=false
CHE_INFRA_KUBERNETES_TRUST__CERTS=false
CHE_INFRA_KUBERNETES_BOOTSTRAPPER_INSTALLER__TIMEOUT__SEC=60000
CHE_INFRA_KUBERNETES_BOOTSTRAPPER_SERVER__CHECK__PERIOD__SEC=1000
CHE_INFRA_KUBERNETES_WORKSPACE__START__TIMEOUT__MIN=100
CHE_INFRA_KUBERNETES_INGRESS__START__TIMEOUT__MIN=100
CHE_KEYCLOAK_HOST=keycloak.cnative.io
CHE_KEYCLOAK_PORT=5080
CHE_KEYCLOAK_AUTH__SERVER__URL=http://${CHE_KEYCLOAK_HOST}:${CHE_KEYCLOAK_PORT}/auth
CHE_KEYCLOAK_CLIENT__ID=che-public
CHE_KEYCLOAK_REALM=che
CHE_LIMITS_WORKSPACE_IDLE_TIMEOUT=-1
CHE_LOCAL_CONF_DIR=/etc/conf
CHE_LOGGER_CONFIG=org.eclipse.che.workspace.infrastructure.kubernetes=DEBUG,org.eclipse.che.api.workspace.server=DEBUG
CHE_LOGS_APPENDERS_IMPL=plaintext
CHE_LOGS_DIR=/data/logs
CHE_LOG_LEVEL=INFO
CHE_METRICS_ENABLED=true
CHE_MULTIUSER=true
CHE_PREDEFINED_STACKS_RELOAD__ON__START=false
CHE_TRACING_ENABLED=true
CHE_WORKSPACE_AUTO_START=true
CHE_WORKSPACE_DEVFILE__REGISTRY__URL=https://che-devfile-registry.openshift.io/
CHE_WORKSPACE_HTTPS__PROXY=
CHE_WORKSPACE_HTTP__PROXY=
CHE_WORKSPACE_NO__PROXY=true
CHE_WORKSPACE_PLUGIN__REGISTRY__URL=https://che-plugin-registry.openshift.io/v3
CHE_WORKSPACE_PLUGIN__BROKER_WAIT__TIMEOUT__MIN=10
CHE_WSAGENT_CORS_ALLOWED__ORIGINS=NULL
CHE_WSAGENT_CORS_ALLOW__CREDENTIALS=true
CHE_WSAGENT_CORS_ENABLED=true
JAEGER_ENDPOINT=http://jaeger-collector:14268/api/traces
JAEGER_REPORTER_MAX_QUEUE_SIZE=10000
JAEGER_SAMPLER_MANAGER_HOST_PORT=jaeger:5778
JAEGER_SAMPLER_PARAM=1
JAEGER_SAMPLER_TYPE=const
JAEGER_SERVICE_NAME=che-server
JAVA_OPTS=-XX:MaxRAMFraction=2 -XX:+UseParallelGC -XX:MinHeapFreeRatio=10 -XX:MaxHeapFreeRatio=20 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90 -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -Dsun.zip.disableMemoryMapping=true -Xms20m 

I'm seeing the exact same log entries as mentioned above; the following line is the last one in the theia pod:

root INFO Deploying backend plugin "git@1.0.0" from "/tmp/vscode-unpacked/vscode-git-1.3.0.1.vsix/extension/out/main"

Ingresses are created fine and seem to work.

The che server logs show the following:

2019-07-15 11:07:38,664[aceSharedPool-0]  [DEBUG] [etesEnvironmentProvisionerImpl 118]  - Start provisioning Kubernetes environment for workspace 'workspacezmi16d4ro3x0e8no'
2019-07-15 11:07:38,664[aceSharedPool-0]  [DEBUG] [etesEnvironmentProvisionerImpl 120]  - Provisioning installer server ports for workspace 'workspacezmi16d4ro3x0e8no'
2019-07-15 11:07:38,670[aceSharedPool-0]  [DEBUG] [etesEnvironmentProvisionerImpl 123]  - Provisioning logs volume for workspace 'workspacezmi16d4ro3x0e8no'
2019-07-15 11:07:38,673[aceSharedPool-0]  [DEBUG] [etesEnvironmentProvisionerImpl 128]  - Provisioning servers & env vars converters for workspace 'workspacezmi16d4ro3x0e8no'
2019-07-15 11:07:38,682[aceSharedPool-0]  [DEBUG] [egy$$EnhancerByGuice$$23118e7f 149]  - Provisioning PVC strategy for workspace 'workspacezmi16d4ro3x0e8no'
2019-07-15 11:07:38,777[aceSharedPool-0]  [DEBUG] [egy$$EnhancerByGuice$$23118e7f 170]  - PVC strategy provisioning done for workspace 'workspacezmi16d4ro3x0e8no'
2019-07-15 11:07:38,778[aceSharedPool-0]  [DEBUG] [etesEnvironmentProvisionerImpl 136]  - Provisioning environment items for workspace 'workspacezmi16d4ro3x0e8no'
2019-07-15 11:07:38,816[aceSharedPool-0]  [DEBUG] [etesEnvironmentProvisionerImpl 147]  - Provisioning Kubernetes environment done for workspace 'workspacezmi16d4ro3x0e8no'
2019-07-15 11:07:38,821[aceSharedPool-0]  [DEBUG] [.c.w.i.k.w.PluginBrokerManager 119]  - Entering plugin brokers deployment chain workspace 'workspacezmi16d4ro3x0e8no'
2019-07-15 11:07:38,822[aceSharedPool-0]  [DEBUG] [c.w.i.k.w.b.ListenBrokerEvents 60]   - Subscribing broker events listener for workspace 'workspacezmi16d4ro3x0e8no'
2019-07-15 11:07:38,825[aceSharedPool-0]  [DEBUG] [egy$$EnhancerByGuice$$23118e7f 183]  - Preparing PVC started for workspace 'workspacezmi16d4ro3x0e8no'
2019-07-15 11:07:38,935[aceSharedPool-0]  [DEBUG] [e.c.w.i.k.n.p.PVCSubPathHelper 110]  - Preparing PVC `claim-che-workspace` for workspace `workspacezmi16d4ro3x0e8no`. Directories to create: [workspacezmi16d4ro3x0e8no/che-logs-che-plugin-broker/, workspacezmi16d4ro3x0e8no/plugins/]
2019-07-15 11:07:43,335[aceSharedPool-0]  [DEBUG] [egy$$EnhancerByGuice$$23118e7f 218]  - Preparing PVC done for workspace 'workspacezmi16d4ro3x0e8no'
2019-07-15 11:07:43,341[aceSharedPool-0]  [DEBUG] [o.e.c.w.i.k.w.b.DeployBroker 77]     - Starting brokers pod for workspace 'workspacezmi16d4ro3x0e8no'
2019-07-15 11:07:43,713[aceSharedPool-0]  [DEBUG] [o.e.c.w.i.k.w.b.DeployBroker 105]    - Brokers pod is created for workspace 'workspacezmi16d4ro3x0e8no'
2019-07-15 11:07:43,715[aceSharedPool-0]  [DEBUG] [e.c.w.i.k.w.b.WaitBrokerResult 63]   - Trying to get brokers result for workspace 'workspacezmi16d4ro3x0e8no'
2019-07-15 11:08:02,261[aceSharedPool-0]  [DEBUG] [.k.w.SidecarToolingProvisioner 85]   - Finished sidecar tooling provisioning workspace 'workspacezmi16d4ro3x0e8no'
2019-07-15 11:08:02,263[aceSharedPool-0]  [DEBUG] [.a.w.s.s.RuntimeInfrastructure 54]   - Start provisioning installer configs for workspace 'workspacezmi16d4ro3x0e8no'
2019-07-15 11:08:02,263[aceSharedPool-0]  [DEBUG] [.a.w.s.s.RuntimeInfrastructure 54]   - Start provisioning installer configs for workspace 'workspacezmi16d4ro3x0e8no'
2019-07-15 11:08:02,264[aceSharedPool-0]  [DEBUG] [.a.w.s.s.RuntimeInfrastructure 54]   - Start provisioning installer configs for workspace 'workspacezmi16d4ro3x0e8no'
2019-07-15 11:08:02,264[aceSharedPool-0]  [DEBUG] [.a.w.s.s.RuntimeInfrastructure 54]   - Start provisioning installer configs for workspace 'workspacezmi16d4ro3x0e8no'
2019-07-15 11:08:02,265[aceSharedPool-0]  [DEBUG] [e.EnvVarEnvironmentProvisioner 44]   - Provisioning environment variables for workspace 'workspacezmi16d4ro3x0e8no'
2019-07-15 11:08:02,289[aceSharedPool-0]  [DEBUG] [e.EnvVarEnvironmentProvisioner 54]   - Environment variables provisioning done for workspace 'workspacezmi16d4ro3x0e8no'
2019-07-15 11:08:02,289[aceSharedPool-0]  [DEBUG] [cyEnvVarEnvironmentProvisioner 48]   - Legacy environment variables not provisioned to workspace 'workspacezmi16d4ro3x0e8no'.
2019-07-15 11:08:02,290[aceSharedPool-0]  [DEBUG] [ctsVolumeForWsAgentProvisioner 50]   - Provisioning project volumes for workspace 'workspacezmi16d4ro3x0e8no'
2019-07-15 11:08:02,290[aceSharedPool-0]  [DEBUG] [w.s.s.p.MachineNameProvisioner 36]   - Provisioning machine names for workspace 'workspacezmi16d4ro3x0e8no'
2019-07-15 11:08:02,393[aceSharedPool-0]  [DEBUG] [etesEnvironmentProvisionerImpl 118]  - Start provisioning Kubernetes environment for workspace 'workspacezmi16d4ro3x0e8no'
2019-07-15 11:08:02,394[aceSharedPool-0]  [DEBUG] [etesEnvironmentProvisionerImpl 120]  - Provisioning installer server ports for workspace 'workspacezmi16d4ro3x0e8no'
2019-07-15 11:08:02,396[aceSharedPool-0]  [DEBUG] [etesEnvironmentProvisionerImpl 123]  - Provisioning logs volume for workspace 'workspacezmi16d4ro3x0e8no'
2019-07-15 11:08:02,402[aceSharedPool-0]  [DEBUG] [etesEnvironmentProvisionerImpl 128]  - Provisioning servers & env vars converters for workspace 'workspacezmi16d4ro3x0e8no'
2019-07-15 11:08:02,681[aceSharedPool-0]  [DEBUG] [egy$$EnhancerByGuice$$23118e7f 149]  - Provisioning PVC strategy for workspace 'workspacezmi16d4ro3x0e8no'
2019-07-15 11:08:02,696[aceSharedPool-0]  [DEBUG] [egy$$EnhancerByGuice$$23118e7f 170]  - PVC strategy provisioning done for workspace 'workspacezmi16d4ro3x0e8no'
2019-07-15 11:08:02,697[aceSharedPool-0]  [DEBUG] [etesEnvironmentProvisionerImpl 136]  - Provisioning environment items for workspace 'workspacezmi16d4ro3x0e8no'
2019-07-15 11:08:02,702[aceSharedPool-0]  [DEBUG] [etesEnvironmentProvisionerImpl 147]  - Provisioning Kubernetes environment done for workspace 'workspacezmi16d4ro3x0e8no'
2019-07-15 11:08:02,703[aceSharedPool-0]  [DEBUG] [.i.k.KubernetesInternalRuntime 198]  - Provisioning of workspace 'workspacezmi16d4ro3x0e8no' completed.
2019-07-15 11:08:02,703[aceSharedPool-0]  [DEBUG] [egy$$EnhancerByGuice$$23118e7f 183]  - Preparing PVC started for workspace 'workspacezmi16d4ro3x0e8no'
2019-07-15 11:08:02,723[aceSharedPool-0]  [DEBUG] [e.c.w.i.k.n.p.PVCSubPathHelper 110]  - Preparing PVC `claim-che-workspace` for workspace `workspacezmi16d4ro3x0e8no`. Directories to create: [workspacezmi16d4ro3x0e8no/projects/, workspacezmi16d4ro3x0e8no/m2/, workspacezmi16d4ro3x0e8no/che-logs-maven/, workspacezmi16d4ro3x0e8no/plugins/]
2019-07-15 11:08:06,791[aceSharedPool-0]  [DEBUG] [egy$$EnhancerByGuice$$23118e7f 218]  - Preparing PVC done for workspace 'workspacezmi16d4ro3x0e8no'
2019-07-15 11:08:07,005[aceSharedPool-0]  [WARN ] [i.f.k.c.i.VersionUsageUtils 55]      - The client is using resource type 'ingresses' with unstable version 'v1beta1'
2019-07-15 11:08:07,192[aceSharedPool-0]  [DEBUG] [.i.k.KubernetesInternalRuntime 823]  - Ingresses created for workspace 'workspacezmi16d4ro3x0e8no'. Wait them to be ready.
2019-07-15 11:09:07,325[aceSharedPool-0]  [DEBUG] [.i.k.KubernetesInternalRuntime 841]  - Ingresses creation for workspace 'workspacezmi16d4ro3x0e8no' done.
2019-07-15 11:09:07,428[aceSharedPool-0]  [DEBUG] [.i.k.KubernetesInternalRuntime 723]  - Begin pods creation for workspace 'workspacezmi16d4ro3x0e8no'
2019-07-15 11:09:07,746[aceSharedPool-0]  [DEBUG] [.i.k.KubernetesInternalRuntime 727]  - Creating pod 'workspacezmi16d4ro3x0e8no.che-jwtproxy' in workspace 'workspacezmi16d4ro3x0e8no'
2019-07-15 11:09:07,749[aceSharedPool-0]  [DEBUG] [.i.k.KubernetesInternalRuntime 754]  - Creating machine 'che-jwtproxy' in workspace 'workspacezmi16d4ro3x0e8no'
2019-07-15 11:09:08,022[aceSharedPool-0]  [DEBUG] [.i.k.KubernetesInternalRuntime 734]  - Creating deployment 'workspacezmi16d4ro3x0e8no.maven' in workspace 'workspacezmi16d4ro3x0e8no'
2019-07-15 11:09:08,026[aceSharedPool-0]  [DEBUG] [.i.k.KubernetesInternalRuntime 754]  - Creating machine 'maven' in workspace 'workspacezmi16d4ro3x0e8no'
2019-07-15 11:09:08,113[aceSharedPool-0]  [DEBUG] [.i.k.KubernetesInternalRuntime 754]  - Creating machine 'che-machine-execomx' in workspace 'workspacezmi16d4ro3x0e8no'
2019-07-15 11:09:08,156[aceSharedPool-0]  [DEBUG] [.i.k.KubernetesInternalRuntime 754]  - Creating machine 'theia-idesjy' in workspace 'workspacezmi16d4ro3x0e8no'
2019-07-15 11:09:08,251[aceSharedPool-0]  [DEBUG] [.i.k.KubernetesInternalRuntime 754]  - Creating machine 'vscode-javap3v' in workspace 'workspacezmi16d4ro3x0e8no'
2019-07-15 11:09:08,270[aceSharedPool-0]  [DEBUG] [.i.k.KubernetesInternalRuntime 740]  - Pods creation finished in workspace 'workspacezmi16d4ro3x0e8no'
2019-07-15 11:09:08,707[aceSharedPool-0]  [DEBUG] [.i.k.KubernetesInternalRuntime 333]  - Waiting to start machines of workspace 'workspacezmi16d4ro3x0e8no'
2019-07-15 11:09:12,335[ineSharedPool-0]  [DEBUG] [.i.k.KubernetesInternalRuntime 458]  - Bootstrapping machine 'che-jwtproxy' in workspace 'workspacezmi16d4ro3x0e8no'
2019-07-15 11:09:12,342[ineSharedPool-1]  [DEBUG] [.i.k.KubernetesInternalRuntime 399]  - Performing servers check for machine 'che-jwtproxy' in workspace 'workspacezmi16d4ro3x0e8no'
2019-07-15 11:09:12,367[ineSharedPool-1]  [DEBUG] [.i.k.KubernetesInternalRuntime 408]  - Servers checks done for machine 'che-jwtproxy' in workspace 'workspacezmi16d4ro3x0e8no'
2019-07-15 11:09:18,867[ineSharedPool-1]  [DEBUG] [.i.k.KubernetesInternalRuntime 458]  - Bootstrapping machine 'theia-idesjy' in workspace 'workspacezmi16d4ro3x0e8no'
2019-07-15 11:09:18,869[ineSharedPool-4]  [DEBUG] [.i.k.KubernetesInternalRuntime 399]  - Performing servers check for machine 'theia-idesjy' in workspace 'workspacezmi16d4ro3x0e8no'
2019-07-15 11:09:18,911[ineSharedPool-3]  [DEBUG] [.i.k.KubernetesInternalRuntime 458]  - Bootstrapping machine 'vscode-javap3v' in workspace 'workspacezmi16d4ro3x0e8no'
2019-07-15 11:09:18,927[ineSharedPool-1]  [DEBUG] [.i.k.KubernetesInternalRuntime 399]  - Performing servers check for machine 'vscode-javap3v' in workspace 'workspacezmi16d4ro3x0e8no'
2019-07-15 11:09:18,940[ineSharedPool-1]  [DEBUG] [.i.k.KubernetesInternalRuntime 408]  - Servers checks done for machine 'vscode-javap3v' in workspace 'workspacezmi16d4ro3x0e8no'
2019-07-15 11:09:18,947[ineSharedPool-0]  [DEBUG] [.i.k.KubernetesInternalRuntime 458]  - Bootstrapping machine 'che-machine-execomx' in workspace 'workspacezmi16d4ro3x0e8no'
2019-07-15 11:09:18,956[ineSharedPool-1]  [DEBUG] [.i.k.KubernetesInternalRuntime 399]  - Performing servers check for machine 'che-machine-execomx' in workspace 'workspacezmi16d4ro3x0e8no'
2019-07-15 11:09:18,977[ineSharedPool-1]  [DEBUG] [.i.k.KubernetesInternalRuntime 408]  - Servers checks done for machine 'che-machine-execomx' in workspace 'workspacezmi16d4ro3x0e8no'
2019-07-15 11:09:18,984[ineSharedPool-2]  [DEBUG] [.i.k.KubernetesInternalRuntime 458]  - Bootstrapping machine 'maven' in workspace 'workspacezmi16d4ro3x0e8no'
2019-07-15 11:09:19,007[ineSharedPool-1]  [DEBUG] [.i.k.KubernetesInternalRuntime 399]  - Performing servers check for machine 'maven' in workspace 'workspacezmi16d4ro3x0e8no'
2019-07-15 11:09:19,008[ineSharedPool-1]  [DEBUG] [.i.k.KubernetesInternalRuntime 408]  - Servers checks done for machine 'maven' in workspace 'workspacezmi16d4ro3x0e8no'
2019-07-15 11:12:19,312[aceSharedPool-0]  [WARN ] [.i.k.KubernetesInternalRuntime 245]  - Failed to start Kubernetes runtime of workspace workspacezmi16d4ro3x0e8no. Cause: Server 'theia' in machine 'theia-idesjy' not available.
2019-07-15 11:12:19,338[ServersChecker]   [DEBUG] [.i.k.KubernetesInternalRuntime 408]  - Servers checks done for machine 'theia-idesjy' in workspace 'workspacezmi16d4ro3x0e8no'
2019-07-15 11:12:21,329[aceSharedPool-0]  [INFO ] [o.e.c.a.w.s.WorkspaceRuntimes 856]   - Workspace 'vdbp-cp/cxp:portal' with id 'workspacezmi16d4ro3x0e8no' start failed

Is there a config option to configure the server check timeout? I've tried to change CHE_INFRA_KUBERNETES_BOOTSTRAPPER_SERVER__CHECK__PERIOD__SEC to 1000, but this doesn't help.

@stefanhenseler

stefanhenseler commented Jul 15, 2019

This is the Devfile I'm using:

apiVersion: 1.0.0
metadata:
  name: portal-dev
components:
  - id: eclipse/che-theia/latest
    type: cheEditor
  - id: ms-vscode/go/latest
    memoryLimit: 512Mi
    type: chePlugin
  - mountSources: true
    command:
      - sleep
    args:
      - infinity
    memoryLimit: 512Mi
    type: dockerimage
    image: 'golang:1.12.4-stretch'
    env:
      - value: '/go:/projects'
        name: GOPATH
      - value: /tmp/.cache
        name: GOCACHE
      - value: '$(echo ${0})\\$'
        name: PS1

@nickboldt nickboldt added area/editor/theia Issues related to the che-theia IDE of Che kind/bug Outline of a bug - must adhere to the bug report template. status/analyzing An issue has been proposed and it is currently being analyzed for effort and implementation approach status/info-needed More information is needed before the issue can move into the “analyzing” state for engineering. labels Jul 15, 2019
@hjbbjh
Author

hjbbjh commented Jul 16, 2019

[screenshot]
The url field in the che_k8s_server table has a null host.
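
A hedged way to inspect that table from inside the cluster, as a sketch only: the Postgres pod label, database name, and user below are assumptions for a typical Che helm deployment and must be adjusted to yours. The column names come from the query output shown later in this thread.

kubectl exec -it $(kubectl get pod -n eclipse -l app=postgres -o name) -n eclipse -- \
  psql -U pgche dbche -c "SELECT machine_name, server_name, url FROM che_k8s_server;"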

@stefanhenseler

I tried to start the eclipse/che-theia:7.0.0-rc-3.0 image locally, and I get the exact same log as when I start it within a workspace. The last line in the log is always:

root INFO Deploying backend plugin "git@1.0.0" from "/tmp/vscode-unpacked/vscode-git-1.3.0.1.vsix/extension/out/main"

Is it possible to test an older version of the editor? I've checked https://che-plugin-registry.openshift.io/v3/plugins/eclipse/che-theia/ but RC-2.0 is not there anymore. Do you still have older versions of the plugin registry published, or what is the best way to test different images?

@skabashnyuk
Contributor

@synax can you try to install che with https://github.com/che-incubator/chectl and try again?

@hjbbjh
Author

hjbbjh commented Jul 16, 2019

What is the url field of the che_k8s_server table when you start the workspace?

@skabashnyuk
Contributor

What is the url field of the che_k8s_server table when you start the workspace?

That is the URL of the running server inside of your workspace. A null url means that something went wrong. I assume some variables or some components may not be set correctly. Can you try to install Che with https://github.com/che-incubator/chectl
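
For reference, a minimal chectl invocation, as a sketch only; see the chectl README for the full flag list, part of which is quoted later in this thread:

chectl server:start --platform=minikube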

@hjbbjh
Author

hjbbjh commented Jul 16, 2019

Can you please explain the null host in the server url? Is it normal?

@skabashnyuk
Contributor

Can you please explain the null host in the server url? Is it normal?

I can't explain it. That is not a normal situation.

@yuqaf1989

@skabashnyuk could something be wrong with RBAC? I just read the source code; it seems Che uses this URL to check whether the theia service is available.

@skabashnyuk
Contributor

@skabashnyuk could something be wrong with RBAC? I just read the source code; it seems Che uses this URL to check whether the theia service is available.

Could be. https://github.com/che-incubator/chectl should handle that.

@stefanhenseler

stefanhenseler commented Jul 16, 2019

In my case the query:

SELECT * FROM che_k8s_server

outputs this:

       workspace_id        |    machine_name     |   server_name    |                                       url                                       | status
---------------------------+---------------------+------------------+---------------------------------------------------------------------------------+---------
 workspacewtdw5ajgbbglnlsw | che-machine-exec9j7 | che-machine-exec | ws://serverofc3hm71-che-machine-exec9j7-server-4444.cnative.io/ | UNKNOWN
 workspacewtdw5ajgbbglnlsw | theia-ideqt5        | theia            | http://serverpuxu3s6e-jwtproxy-server-4400.cnative.io/          | UNKNOWN
 workspacewtdw5ajgbbglnlsw | theia-ideqt5        | theia-dev        | http://serveripm8nl31-theia-ideqt5-server-3130.cnative.io/      | UNKNOWN
 workspacewtdw5ajgbbglnlsw | theia-ideqt5        | theia-redirect-3 | http://serveripm8nl31-theia-ideqt5-server-13133.cnative.io/     | UNKNOWN
 workspacewtdw5ajgbbglnlsw | theia-ideqt5        | theia-redirect-2 | http://serveripm8nl31-theia-ideqt5-server-13132.cnative.io/     | UNKNOWN
 workspacewtdw5ajgbbglnlsw | theia-ideqt5        | theia-redirect-1 | http://serveripm8nl31-theia-ideqt5-server-13131.cnative.io/     | UNKNOWN

These are the ingresses, created correctly in my Che namespace. But I guess the server check fails because the ingress endpoint never becomes available, due to the container not starting properly.
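
One hedged way to confirm that guess is to check whether the workspace services have live endpoints, as a sketch; the namespace and the che.workspace_id label are taken from the outputs pasted in this thread, and whether the Endpoints objects carry that label is an assumption:

kubectl get endpoints -n dev-tools -l che.workspace_id=workspacewtdw5ajgbbglnlsw
kubectl get pods -n dev-tools -l che.workspace_id=workspacewtdw5ajgbbglnlsw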

@stefanhenseler

I'm using Kustomize for my deployment. I've converted the RC-3.0 helm chart using helm convert. What is the difference between chectl and the helm chart? Is chectl the preferred way to install Che?

@hjbbjh
Author

hjbbjh commented Jul 16, 2019

@skabashnyuk but chectl depends on minikube; it's used for local development and testing, not for production.

@hjbbjh
Author

hjbbjh commented Jul 16, 2019

@synax why does the url field of your che_k8s_server table have no host?

@stefanhenseler

stefanhenseler commented Jul 16, 2019

I've uploaded my kustomization for reference https://github.com/synax-io/echlipse-che-kustomization

In case you try to deploy this kustomization, some remarks :)

  • My ingress listens on ports 5080 and 5443, so if you use the default ports, you need to search and replace 5080 with 80 in the base.
  • In case you use a port other than 80 or 443: when you try to log in, you will get the error message "Invalid parameter: redirect_uri". To fix this, log in to Keycloak and add the custom port to Clients > Che-public > Settings in Valid Redirect URIs and Web Origins.

@stefanhenseler

stefanhenseler commented Jul 16, 2019

@hjbbjh Che creates ingresses for each of the services, so the traffic is forwarded to the services and the pods. I guess you have a different config than I do. If you use the kustomization I've posted, you should see the same behavior.

@benoitf
Contributor

benoitf commented Jul 16, 2019

@skabashnyuk but chectl depends on minikube; it's used for local development and testing, not for production

chectl also works with OpenShift, k8s, etc.

@hjbbjh
Author

hjbbjh commented Jul 16, 2019

@benoitf in https://github.com/che-incubator/chectl I can see this sentence: "Currently chectl requires minikube and helm to be locally installed", and in https://www.eclipse.org/che/docs/che-7/che-quick-starts.html I can see this: -p, --platform=platform [default: minikube] Type of Kubernetes platform. Valid values are "minikube", "minishift", "docker4mac", "ocp", "oso".

@slemeur slemeur added the severity/P1 Has a major impact to usage or development of the system. label Jul 16, 2019
@slemeur slemeur added this to the 7.0.0 milestone Jul 16, 2019
@l0rd
Contributor

l0rd commented Jul 16, 2019

@hjbbjh yeah the README file hasn't been updated.

@hjbbjh
Author

hjbbjh commented Jul 16, 2019

@l0rd how can I replace the files in the deploy/kubernetes/helm/che directory?

@stefanhenseler

stefanhenseler commented Jul 16, 2019

OK, so I've changed the plugin version to next because I see the 7.0.0-rc-4.0-SNAPSHOT image was published 4 hours ago (https://hub.docker.com/r/eclipse/che-theia/tags). Still the same behavior with the new image.

I've double-checked the services and ingresses; something seems to be off here:

server097vqjzm-theia-ide6l7          ClusterIP   10.96.30.127   <none>        3100/TCP,3130/TCP,13133/TCP,13132/TCP,13131/TCP

The service for theia has all the ports from the plugin spec configured, and it seems all endpoints are available:

Name:              server097vqjzm-theia-ide6l7
Namespace:         dev-tools
Labels:            che.workspace_id=workspace4bjuq1uamfwen2is
Annotations:       org.eclipse.che.machine.name: theia-ide6l7
Selector:          che.original_name=maven,che.workspace_id=workspace4bjuq1uamfwen2is
Type:              ClusterIP
IP:                10.96.30.127
Port:              server-3100  3100/TCP
TargetPort:        3100/TCP
Endpoints:         192.168.251.168:3100
Port:              server-3130  3130/TCP
TargetPort:        3130/TCP
Endpoints:         192.168.251.168:3130
Port:              server-13133  13133/TCP
TargetPort:        13133/TCP
Endpoints:         192.168.251.168:13133
Port:              server-13132  13132/TCP
TargetPort:        13132/TCP
Endpoints:         192.168.251.168:13132
Port:              server-13131  13131/TCP
TargetPort:        13131/TCP
Endpoints:         192.168.251.168:13131
Session Affinity:  None
Events:            <none>

But the ingress for port 3100 is missing, and it seems to be the only one missing.

The theia container states it is listening on 3100.

NAME               HOSTS                                                                       ADDRESS   PORTS   AGE
che-ingress        che.cnative.io                                                        80      1h
ingress6whc2bbs    serverrynu0hu4-theia-idessk-server-13131.cnative.io                   80      2m
ingressevmmarcq    serverro6ua28x-che-machine-execaxg-server-4444.cnative.io             80      2m
ingressi2eghett    serverrynu0hu4-theia-idessk-server-13133.cnative.io                   80      2m
ingressjwvqhov7    serverrynu0hu4-theia-idessk-server-3130.cnative.io                    80      2m
ingresskvpdusoa    serverrynu0hu4-theia-idessk-server-13132.cnative.io                   80      2m
ingressm6lfbnm8    server4u166s6r-maven-server-8080.cnative.io                           80      2m
ingressyh8pz7qz    serverp8jqsjb0-jwtproxy-server-4400.cnative.io                        80      2m
keycloak-ingress   keycloak.cnative.io                                                   80      1h

I would expect an ingress for all endpoints configured as public:

https://che-plugin-registry.openshift.io/v3/plugins/eclipse/che-theia/next/

Is this behavior expected?
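
A hedged way to see the mismatch in one shot, as a sketch; this assumes the ingresses carry the same che.workspace_id label as the service shown above:

kubectl get svc,ingress -n dev-tools -l che.workspace_id=workspace4bjuq1uamfwen2is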

@stefanhenseler

stefanhenseler commented Jul 16, 2019

As far as I understand, the readiness checker https://github.com/eclipse/che/blob/7.0.0-rc-3.x/wsmaster/che-core-api-workspace/src/main/java/org/eclipse/che/api/workspace/server/hc/ServersChecker.java just attempts to reach each server for a particular machine.

So in my example, the checker iterates through all server_name URLs where machine_name is theia-ideqt5.

workspace_id        |    machine_name     |   server_name    |                                       url                                       | status
---------------------------+---------------------+------------------+---------------------------------------------------------------------------------+---------
 workspacewtdw5ajgbbglnlsw | che-machine-exec9j7 | che-machine-exec | ws://serverofc3hm71-che-machine-exec9j7-server-4444.cnative.io/ | UNKNOWN
 workspacewtdw5ajgbbglnlsw | theia-ideqt5        | theia            | http://serverpuxu3s6e-jwtproxy-server-4400.cnative.io/          | UNKNOWN
 workspacewtdw5ajgbbglnlsw | theia-ideqt5        | theia-dev        | http://serveripm8nl31-theia-ideqt5-server-3130.cnative.io/      | UNKNOWN
 workspacewtdw5ajgbbglnlsw | theia-ideqt5        | theia-redirect-3 | http://serveripm8nl31-theia-ideqt5-server-13133.cnative.io/     | UNKNOWN
 workspacewtdw5ajgbbglnlsw | theia-ideqt5        | theia-redirect-2 | http://serveripm8nl31-theia-ideqt5-server-13132.cnative.io/     | UNKNOWN
 workspacewtdw5ajgbbglnlsw | theia-ideqt5        | theia-redirect-1 | http://serveripm8nl31-theia-ideqt5-server-13131.cnative.io/     | UNKNOWN
Failed to start Kubernetes runtime of workspace workspace4bjuq1uamfwen2is. Cause: Server 'theia' in machine 'theia-ideqt5' not available.

The error message states that the server theia is not available, which means the URL http://serverpuxu3s6e-jwtproxy-server-4400.cnative.io/ doesn't work.

I've checked the ingress and service.

The jwtproxy pod is running; the service and ingress are present.

The jwtproxy log shows:

time="2019-07-16T11:32:12Z" level=info msg="Starting reverse proxy (Listening on ':4400')"
time="2019-07-16T11:35:20Z" level=info msg="Received stop signal. Stopping gracefully..."

@stefanhenseler

stefanhenseler commented Jul 16, 2019

Here is the config of the jwtproxy:

apiVersion: v1
kind: ConfigMap
metadata:
  creationTimestamp: "2019-07-16T11:41:54Z"
  labels:
    che.original_name: jwtproxy-config-workspace4bjuq1uamfwen2is
    che.workspace_id: workspace4bjuq1uamfwen2is
  name: workspace4bjuq1uamfwen2is.jwtproxy-config-workspace4bjuq1uamfwen2is
  namespace: dev-tools
data:
  config.yaml: |
    ---
    jwtproxy:
      signer_proxy:
        enabled: false
      verifier_proxies:
      - listen_addr: ":4400"
        verifier:
          audience: "workspace4bjuq1uamfwen2is"
          auth_cookies_enabled: true
          auth_redirect_url: "http://che.cnative.io:5080/_app/loader.html"
          claims_verifiers:
          - options:
              iss: "wsmaster"
            type: "static"
          key_server:
            options:
              issuer: "wsmaster"
              key_id: "workspace4bjuq1uamfwen2is"
              public_key_path: "/config/mykey.pub"
            type: "preshared"
          max_skew: "1m"
          max_ttl: "8800h"
          nonce_storage:
            type: "void"
          upstream: "http://server4ruaow7a-theia-idepl9:3100"
  mykey.pub: |-
    -----BEGIN PUBLIC KEY-----
    MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAw4/4I/50QXKgJ89LSQ6Ne/i3mZvfo1yVba+p8Ifmw+x1/VY8SeP9s3YgMXyMooetYx
    -----END PUBLIC KEY-----

I've checked the upstream service and it exists. But now we are back to the theia container not starting up properly.

Is it possible to get debug output from the server checker?
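
One hedged option, following the CHE_LOGGER_CONFIG pattern already shown in the config earlier in this thread and the package of the ServersChecker linked above; whether this logger actually prints more detail is an assumption:

CHE_LOGGER_CONFIG=org.eclipse.che.api.workspace.server.hc=DEBUG,org.eclipse.che.workspace.infrastructure.kubernetes=DEBUG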

@hjbbjh
Author

hjbbjh commented Jul 16, 2019

I tried to deploy it via chectl again; now I get the errors below:
[screenshot]
It's stuck here, but the postgresql, keycloak, and che server pods have already started.

[screenshot]
[screenshot]

@l0rd @skabashnyuk

Today I tried again and got the same error:
[screenshot]

@stefanhenseler

@hjbbjh I think there is something wrong with your ENV config. Try to use my kustomization values.

@stefanhenseler

stefanhenseler commented Jul 16, 2019

OK, after a bit more testing... I'm not sure this is actually related to che-theia. I've changed the editor to Eclipse GWT IDE and the behavior is very similar. The only difference is that I actually see an error message during the listener registration.

59) Error injecting constructor, org.eclipse.che.api.core.ServerException: java.io.IOException: Failed access: http://che.cnative.io:5080/api/workspace/workspace4bjuq1uamfwen2is?token, method: GET, response code: 401, message: Authorization token is missed
  at org.eclipse.che.api.project.server.impl.WorkspaceProjectSynchronizer.<init>(WorkspaceProjectSynchronizer.java:58)
  at org.eclipse.che.api.project.server.impl.WorkspaceProjectSynchronizer.class(WorkspaceProjectSynchronizer.java:42)
  while locating org.eclipse.che.api.project.server.impl.WorkspaceProjectSynchronizer
    for the 1st parameter of org.eclipse.che.api.languageserver.WorkspaceConfigProvider.<init>(WorkspaceConfigProvider.java:62)
  at org.eclipse.che.api.languageserver.WorkspaceConfigProvider.class(WorkspaceConfigProvider.java:47)
  while locating org.eclipse.che.api.languageserver.WorkspaceConfigProvider
  while locating org.eclipse.che.api.languageserver.LanguageServerConfigProvider annotated with @com.google.inject.internal.Element(setName=,uniqueId=43, type=MULTIBINDER, keyType=)
  while locating java.util.Set<org.eclipse.che.api.languageserver.LanguageServerConfigProvider>
    for the 1st parameter of org.eclipse.che.api.languageserver.LanguageServerConfigInitializer.<init>(LanguageServerConfigInitializer.java:52)
  at org.eclipse.che.api.languageserver.LanguageServerModule.configure(LanguageServerModule.java:33) (via modules: org.eclipse.che.wsagent.server.WsAgentModule -> org.eclipse.che.api.languageserver.LanguageServerModule)
  while locating org.eclipse.che.api.languageserver.LanguageServerConfigInitializer
    for the 1st parameter of org.eclipse.che.api.languageserver.LanguageServerInitializer.<init>(LanguageServerInitializer.java:78)
  at org.eclipse.che.api.languageserver.LanguageServerModule.configure(LanguageServerModule.java:35) (via modules: org.eclipse.che.wsagent.server.WsAgentModule -> org.eclipse.che.api.languageserver.LanguageServerModule)

Could this issue be related to some problem with the jwtproxy and the Keycloak config? Is there a way to get more detailed info from the server check, so I can confirm what actually goes wrong?

Looks like the ?token= value is missing...

The CHE_MACHINE_TOKEN variable is configured for the pod. If I append the token, I get access to the workspace via the API. So I wonder: why is it not using the token? Is this configured somewhere else?
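
For reference, a sketch of the manual check described above; run from inside the workspace pod, where CHE_MACHINE_TOKEN is set, and the URL is the one from the stack trace:

curl "http://che.cnative.io:5080/api/workspace/workspace4bjuq1uamfwen2is?token=$CHE_MACHINE_TOKEN"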

@l0rd l0rd mentioned this issue Jul 16, 2019
85 tasks
@l0rd l0rd removed this from the 7.0.0 milestone Jul 16, 2019
@l0rd l0rd removed the status/info-needed More information is needed before the issue can move into the “analyzing” state for engineering. label Jul 16, 2019
@l0rd l0rd changed the title Failed to start Kubernetes runtime of workspace workspaces112lfo56ngilrgn. Cause: Server 'theia' in container 'theia-ide828' not available Failed to start Kubernetes runtime of workspace. Cause: Server 'theia' in container 'theia-ide828' not available Jul 16, 2019
@l0rd
Copy link
Contributor

l0rd commented Jul 16, 2019

@slemeur clearing the milestone for now. The issue is still under investigation. Not sure if we are able to reproduce, if it's a duplicate etc...

@skabashnyuk please have a look at latest @hjbbjh and @synax tests results

@l0rd l0rd changed the title Failed to start Kubernetes runtime of workspace. Cause: Server 'theia' in container 'theia-ide828' not available Failed to start Kubernetes runtime of workspace. Cause: Server 'theia' not available Jul 16, 2019
@yuqaf1989

yuqaf1989 commented Jul 17, 2019

@hjbbjh's issue: we've changed two parts of the helm chart:
1. values.yaml: serverStrategy: default-host, using default-host network mode
2. requirements.yaml:

  - name: prometheus
    repository: https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
    version: ^5.4.0
    condition: global.metricsEnabled
  - name: grafana
    repository: https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
    version: ^0.7.0
    condition: global.metricsEnabled

Because in China we cannot connect to Google services directly, we use Aliyun instead.

@yuqaf1989

yuqaf1989 commented Jul 17, 2019

Tried to deploy via chectl (we've changed the chectl source code to prevent it updating requirements.yaml) and found the errors below:

 ❯ ✅  Post installation checklist
    ❯ Che pod bootstrap
      ✖ scheduling
        → ERR_TIMEOUT: Timeout set to pod wait timeout 300000. podExist: false, currentPhase: undefined
        downloading images
        starting
      Retrieving Che Server URL
      Che status check
Error: ERR_TIMEOUT: Timeout set to pod wait timeout 300000. podExist: false, currentPhase: undefined
    at KubeHelper.<anonymous> (/snapshot/chectl/lib/api/kube.js:0:0)
    at Generator.next (<anonymous>)
    at fulfilled (/snapshot/chectl/node_modules/tslib/tslib.js:107:62)

But the che pod is working and we can access the dashboard; when we try to start a workspace, the same error Server 'theia' not available occurred again.

@yuqaf1989

yuqaf1989 commented Jul 17, 2019

Thanks all @skabashnyuk @l0rd @synax. We tried to redeploy Che in multi-host mode again, and now the theia service is OK. We have changed the following configs:
1. coredns configmap, add:

rewrite name regex (.*)\.my-nginx-nginx-ingress-controller\.default\.svc\.cluster\.local my-nginx-nginx-ingress-controller.default.svc.cluster.local
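
For anyone else trying this, a hedged sketch of where that line sits; the Corefile below is abridged and illustrative, and the plugin list will differ per cluster. Edit it with kubectl -n kube-system edit configmap coredns and put the rewrite before the kubernetes plugin:

.:53 {
    errors
    health
    rewrite name regex (.*)\.my-nginx-nginx-ingress-controller\.default\.svc\.cluster\.local my-nginx-nginx-ingress-controller.default.svc.cluster.local
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
    }
    forward . /etc/resolv.conf
}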

It seems that something is wrong with default-host mode, and it's maybe not the same as @synax's problem.

@hjbbjh
Author

hjbbjh commented Jul 19, 2019

@skabashnyuk @l0rd why does default-host not work? Users must use our DNS service in multi-host mode; I want to avoid that, so I want to deploy in default-host mode.

@sleshchenko
Member

Here is a brief explanation of why single-host and default-host do not work: #12971 (comment)
Maybe the errors described in this issue are caused by other problems, but I believe the root cause and possible solutions described there should be investigated further.

@metlos
Contributor

metlos commented Sep 10, 2019

This is worth a respin now that #14189 has been addressed. @hjbbjh, the fix should be in the next nightly image of the che server.

@azatsarynnyy
Member

I suppose it's fixed. Feel free to reopen in case it's reproduced.

@caiziqi33

I also encountered the same problem: Error: Failed to run the workspace: "Server 'theia' in container 'theia-idegoz' not available."
The environment was built following the minikube method from the official docs. Can you talk specifically about the CoreDNS configuration? Thank you.
