
Solr operator going in CrashLoopBackOff state #41

Closed
deepak71 opened this issue Oct 30, 2019 · 21 comments

Comments

@deepak71

deepak71 commented Oct 30, 2019

Describe the bug
While deploying the Solr operator on our K8s cluster environment, the operator pod fails with a CrashLoopBackOff error.

To Reproduce
Steps to reproduce the behavior:

  1. Installed the Zookeeper operator first, followed by the Solr CRDs and operator installation
  2. The Solr operator is failing (the Zookeeper operator is running)
  3. Detailed logs are below:

NAME                                READY   STATUS             RESTARTS   AGE
pod/solr-operator-b658b985b-z87h7   0/1     CrashLoopBackOff   28         126m
pod/zk-operator-5c769d7747-w2jv8    1/1     Running            0          130m

---kubectl logs pod/solr-operator-b658b985b-z87h7
{"level":"info","ts":1572414608.8584437,"logger":"entrypoint","msg":"solr-operator Version: 0.2.1"}
{"level":"info","ts":1572414608.8585217,"logger":"entrypoint","msg":"solr-operator Git SHA: "}
{"level":"info","ts":1572414608.8585274,"logger":"entrypoint","msg":"Go Version: go1.12.5"}
{"level":"info","ts":1572414608.8585312,"logger":"entrypoint","msg":"Go OS/Arch: linux / amd64"}
{"level":"info","ts":1572414608.8585346,"logger":"entrypoint","msg":"setting up client for manager"}
{"level":"info","ts":1572414608.859087,"logger":"entrypoint","msg":"setting up manager"}
{"level":"info","ts":1572414609.1637375,"logger":"entrypoint","msg":"Registering Components."}
{"level":"info","ts":1572414609.1637914,"logger":"entrypoint","msg":"setting up scheme"}
{"level":"info","ts":1572414609.255684,"logger":"entrypoint","msg":"Setting up controller"}
{"level":"info","ts":1572414609.2559223,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"solrbackup-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1572414609.25624,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"solrbackup-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1572414609.2564533,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"solrcloud-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1572414609.2565868,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"solrcloud-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1572414609.256682,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"solrcloud-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1572414609.2567837,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"solrcloud-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1572414609.2568824,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"solrcloud-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1572414609.2569659,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"solrcloud-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1572414609.2570605,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"solrcloud-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1572414609.257194,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"solrcollection-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1572414609.25732,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"solrprometheusexporter-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1572414609.2574298,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"solrprometheusexporter-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1572414609.2574573,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"solrprometheusexporter-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1572414609.2574737,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"solrprometheusexporter-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1572414609.2574875,"logger":"entrypoint","msg":"setting up webhooks"}
{"level":"info","ts":1572414609.257492,"logger":"entrypoint","msg":"Starting the Cmd."}

Expected behavior
As per the getting-started steps, the Solr operator should be running once we deploy the Zookeeper operator (optional), but the Solr operator is failing to start.

Environment (please complete the following information):

  • Operating System and Version (K8s Nodes): CoreOS 2191.5.0 (Rhyolite)
  • K8s Version: 1.14
  • Namespace: default
  • Permission: admin
@sepulworld
Contributor

Can you share a kubectl describe of the pod and of the deployment as well? Also, do you see the same behavior if you delete the solr-operator pod and let it come back?
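
For example (pod name taken from the output above; add -n <namespace> if the operator is not running in the default namespace):

kubectl describe pod solr-operator-b658b985b-z87h7
kubectl describe deployment solr-operator
kubectl delete pod solr-operator-b658b985b-z87h7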

@HoustonPutman
Contributor

The Solr Operator version it prints out is wrong; that is an issue I need to fix. What version of the container are you using?

@swarupdonepudi
Contributor

The solr-operator might be getting OOMKilled. That's what happened in our case.

@deepak71
Author

deepak71 commented Oct 31, 2019

Yes, the operator pod is getting OOMKilled, but we have enough resources on the nodes. Also, it's the same behavior if we delete the solr-operator pod.
We're not passing any custom parameters; we just followed the README: first deployed the Zookeeper operator, then the Solr operator.

k8 describe pod solr-operator-5c57549cb5-mkpxj -n solr
Name:               solr-operator-5c57549cb5-mkpxj
Namespace:          solr
Priority:           0
PriorityClassName:  <none>
Node:               k8sstp-kube-worker-lx04/10.190.67.166
Start Time:         Thu, 31 Oct 2019 10:14:35 +0530
Labels:             control-plane=solr-operator
                    controller-tools.k8s.io=1.0
                    pod-template-hash=5c57549cb5
Annotations:        kubernetes.io/psp: default
                    prometheus.io/scrape: true
                    seccomp.security.alpha.kubernetes.io/pod: docker/default
Status:             Running
IP:                 10.244.8.43
Controlled By:      ReplicaSet/solr-operator-5c57549cb5
Containers:
  solr-operator:
    Container ID:  docker://c678f704d35d6b09dade2e97b04fe1640256da4db34b96f8da9c32cbcb2c312e
    Image:         registry.mckinsey.com/solr-operator:0.1.3
    Image ID:      docker-pullable://registry.mckinsey.com/solr-operator@sha256:299f7fb8e188709046f22d0a12773e5790fbfe2bbc8dd8021697c716cb9e5764
    Ports:         8080/TCP, 9876/TCP
    Host Ports:    0/TCP, 0/TCP
    Args:
      -zk-operator=true
      -etcd-operator=false
    State:          Running
      Started:      Thu, 31 Oct 2019 10:18:08 +0530
    Last State:     Terminated
      Reason:       OOMKilled
      Exit Code:    137
      Started:      Thu, 31 Oct 2019 10:16:31 +0530
      Finished:     Thu, 31 Oct 2019 10:16:33 +0530
    Ready:          True
    Restart Count:  5
    Limits:
      cpu:     100m
      memory:  30Mi
    Requests:
      cpu:     100m
      memory:  20Mi
    Environment:
      POD_NAMESPACE:  solr (v1:metadata.namespace)
      SECRET_NAME:    webhook-server-secret
    Mounts:
      /tmp/cert from cert (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-x4hbt (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  webhook-server-secret
    Optional:    false
  default-token-x4hbt:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-x4hbt
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                    From                              Message
  ----     ------     ----                   ----                              -------
  Normal   Scheduled  3m42s                  default-scheduler                 Successfully assigned solr/solr-operator-5c57549cb5-mkpxj to k8sstp-kube-worker-lx04
  Normal   Started    2m43s (x4 over 3m45s)  kubelet, k8sstp-kube-worker-lx04  Started container solr-operator
  Warning  BackOff    2m14s (x6 over 3m33s)  kubelet, k8sstp-kube-worker-lx04  Back-off restarting failed container
  Normal   Pulling    2m (x5 over 3m49s)     kubelet, k8sstp-kube-worker-lx04  Pulling image "registry.mckinsey.com/solr-operator:0.1.3"
  Normal   Pulled     119s (x5 over 3m49s)   kubelet, k8sstp-kube-worker-lx04  Successfully pulled image "registry.mckinsey.com/solr-operator:0.1.3"
  Normal   Created    119s (x5 over 3m49s)   kubelet, k8sstp-kube-worker-lx04  Created container solr-operator

@deepak71
Author

After increasing the memory in the resources section, it started working.

Thank you for your support.
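
For reference, the change is just a bump to the limits in the solr-operator Deployment's resources block, along these lines (the numbers below are illustrative, not a recommendation):

resources:
  limits:
    cpu: 100m
    memory: 128Mi
  requests:
    cpu: 100m
    memory: 64Mi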

@swarupdonepudi
Contributor

swarupdonepudi commented Oct 31, 2019

haha I was about to paste this - https://github.com/bloomberg/solr-operator/blob/master/config/operators/solr_operator.yaml#L453

Cool. You figured it out.

@deepak71
Author

Did you face any problem in the next step?

kubectl apply -f example/test_solrcloud.yaml -n solr
solrcloud.solr.bloomberg.com/example created

kubectl get solrcloud example  -n solr
NAME      VERSION   TARGETVERSION   DESIREDNODES   NODES   READYNODES   AGE
example   8.2.0                     4              0       0            21m

kubectl describe solrcloud example  -n solr
Name:         example
Namespace:    solr
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"solr.bloomberg.com/v1beta1","kind":"SolrCloud","metadata":{"annotations":{},"name":"example","namespace":"solr"},"spec":{"r...
API Version:  solr.bloomberg.com/v1beta1
Kind:         SolrCloud
Metadata:
  Creation Timestamp:  2019-10-31T06:48:59Z
  Generation:          2
  Resource Version:    5719182
  Self Link:           /apis/solr.bloomberg.com/v1beta1/namespaces/solr/solrclouds/example
  UID:                 84583580-fbaa-11e9-9ebd-0050569bab1a
Spec:
  Busy Box Image:
    Pull Policy:  IfNotPresent
    Repository:   library/busybox
    Tag:          1.28.0-glibc
  Replicas:       4
  Solr Image:
    Pull Policy:  IfNotPresent
    Repository:   library/solr
    Tag:          8.2.0
  Solr Java Mem:  -Xms500m -Xmx1g
  Zookeeper Ref:
    Provided:
      Zetcd:  <nil>
      Zookeeper:
        Image:
          Pull Policy:  IfNotPresent
          Repository:   emccorp/zookeeper
          Tag:          3.5.4-beta-operator
        Persistent Volume Claim Spec:
          Access Modes:
            ReadWriteOnce
          Data Source:  <nil>
          Resources:
            Requests:
              Storage:  5Gi
        Replicas:       3
Status:
  Backup Restore Ready:     false
  Internal Common Address:  http://example-solrcloud-common.solr
  Ready Replicas:           0
  Replicas:                 0
  Solr Nodes:
  Version:  8.2.0
  Zookeeper Connection Info:
    Chroot:  /
Events:      <none>

@HoustonPutman
Contributor

I don't think it's able to create the Zookeeper instance. You have to have a Persistent Volume that your Zookeeper Persistent Volume Claim Spec can map to.

The Zookeeper Operator does not support non-persistent storage yet. pravega/zookeeper-operator#64
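
If your cluster has no dynamic provisioner, you can pre-create the volumes yourself, one per Zookeeper replica. A minimal sketch of a matching PersistentVolume (the hostPath, name, and path below are assumptions for illustration only):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: zk-data-pv-0
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/data/zk-0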

@deepak71
Author

deepak71 commented Oct 31, 2019

I tried to give the storage class name so that dynamic volumes are provisioned. In which section of the test_solrcloud.yaml file can we pass the volume details for Zookeeper and for Solr itself?
How can we map the storage class name and size while creating a Solr cluster? Below is what I attempted as a fix:

apiVersion: solr.bloomberg.com/v1beta1
kind: SolrCloud
metadata:
  name: example
spec:
  replicas: 4
  solrImage:
    tag: 8.2.0
  solrJavaMem: "-Xms500m -Xmx1g"
  solrOpts: "-Dsolr.autoSoftCommit.maxTime=10000"
  solrGCTune: "-XX:SurvivorRatio=4 -XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=8"
  Zookeeper Ref:
    Provided:
      Persistent Volume Claim Spec:
                storageClassName: "openebs-cstor"
                Access Modes:
                  ReadWriteOnce
                Resources:
                  Requests:
                    Storage:  100Mi

@HoustonPutman
Contributor

your ZookeeperRef is missing a level:

Zookeeper Ref:
    Provided:
        Zookeeper:
            Persistent Volume Claim Spec:
                storageClassName: "openebs-cstor"
                Access Modes:
                  ReadWriteOnce
                Resources:
                  Requests:
                    Storage:  100Mi

@deepak71
Author

deepak71 commented Nov 1, 2019

Even that (with the Zookeeper level added) isn't helping. I also tried to pass the Persistent Volume Claim Spec after the SolrCloud resource was deployed.

Zookeeper is launching with the default specification; it's not picking up the custom values. Is there anything I'm missing?

@sepulworld
Contributor

I'm pretty sure the YAML input is case sensitive. Also, there are spaces in your spec field names. Let me show you something that works... one sec.

@sepulworld
Contributor

sepulworld commented Nov 1, 2019

Example YAML that requests a persistentVolumeClaimSpec for Zookeeper. (Note: case and spacing matter in the YAML spec.)

apiVersion: solr.bloomberg.com/v1beta1
kind: SolrCloud
metadata:
  name: example
spec:
  replicas: 3
  solrImage:
    tag: 8.2.0
  solrJavaMem: "-Xms1g -Xmx3g"
  solrPodPolicy:
    resources:
      limits:
        memory: "1G"
      requests:
        cpu: "65m"
        memory: "156Mi"
  zookeeperRef:
    provided:
      zookeeper:
        persistentVolumeClaimSpec:
          storageClassName: "hostpath"
          resources:
            requests:
              storage: "5Gi"
        replicas: 3
        zookeeperPodPolicy:
          resources:
            limits:
              memory: "1G"
            requests:
              cpu: "65m"
              memory: "156Mi"
  solrOpts: "-Dsolr.autoSoftCommit.maxTime=10000"
  solrGCTune: "-XX:SurvivorRatio=4 -XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=8"

Verified that the StatefulSet for ZK applied the 5Gi storage request.
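
One way to verify (the resource names below assume the SolrCloud is named example, as in the YAML above):

kubectl get statefulset example-solrcloud-zookeeper -o jsonpath='{.spec.volumeClaimTemplates[0].spec.resources.requests.storage}'
kubectl get pvc data-example-solrcloud-zookeeper-0 -o jsonpath='{.spec.resources.requests.storage}'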

@deepak71
Author

deepak71 commented Nov 2, 2019

Thanks for providing the correct YAML. It moved on, but it's now failing during the Zookeeper cluster reconciliation.

k8 get pods
NAME                             READY   STATUS             RESTARTS   AGE
example-solrcloud-0              0/1     CrashLoopBackOff   8          17m
example-solrcloud-zookeeper-0    1/1     Running            0          17m
example-solrcloud-zookeeper-1    0/1     CrashLoopBackOff   8          16m

k8 describe pod example-solrcloud-zookeeper-1
Name:               example-solrcloud-zookeeper-1
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               k8sstp-kube-worker-lx04/10.190.67.166
Start Time:         Sun, 03 Nov 2019 01:11:02 +0530
Labels:             app=example-solrcloud-zookeeper
                    controller-revision-hash=example-solrcloud-zookeeper-6f9f446cc8
                    kind=ZookeeperMember
                    statefulset.kubernetes.io/pod-name=example-solrcloud-zookeeper-1
Annotations:        kubernetes.io/psp: default
                    seccomp.security.alpha.kubernetes.io/pod: docker/default
Status:             Running
IP:                 10.244.8.139
Controlled By:      StatefulSet/example-solrcloud-zookeeper
Containers:
  zookeeper:
    Container ID:  docker://db56e467f2d8c9ed296022a2a75dde7f80cb202c388f295a0689fce96110dec2
    Image:         emccorp/zookeeper:3.5.4-beta-operator
    Image ID:      docker-pullable://emccorp/zookeeper@sha256:c4656ca1e0103b1660a978fa8b2bfecfae5baf6746bb1839360821abd209082f
    Ports:         2181/TCP, 2888/TCP, 3888/TCP
    Host Ports:    0/TCP, 0/TCP, 0/TCP
    Command:
      /usr/local/bin/zookeeperStart.sh
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Sun, 03 Nov 2019 01:27:02 +0530
      Finished:     Sun, 03 Nov 2019 01:27:02 +0530
    Ready:          False
    Restart Count:  8
    Liveness:       exec [zookeeperLive.sh] delay=10s timeout=10s period=10s #success=1 #failure=3
    Readiness:      exec [zookeeperReady.sh] delay=10s timeout=10s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /conf from conf (rw)
      /data from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-xrqn6 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  data-example-solrcloud-zookeeper-1
    ReadOnly:   false
  conf:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      example-solrcloud-zookeeper-configmap
    Optional:  false
  default-token-xrqn6:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-xrqn6
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                  Age                  From                              Message
  ----     ------                  ----                 ----                              -------
  Warning  FailedScheduling        17m                  default-scheduler                 pod has unbound immediate PersistentVolumeClaims (repeated 4 times)
  Normal   Scheduled               17m                  default-scheduler                 Successfully assigned default/example-solrcloud-zookeeper-1 to k8sstp-kube-worker-lx04
  Normal   SuccessfulAttachVolume  17m                  attachdetach-controller           AttachVolume.Attach succeeded for volume "pvc-b2e20492-fda8-11e9-a7f3-0050569b2667"
  Normal   Pulling                 16m (x4 over 16m)    kubelet, k8sstp-kube-worker-lx04  Pulling image "emccorp/zookeeper:3.5.4-beta-operator"
  Normal   Pulled                  16m (x4 over 16m)    kubelet, k8sstp-kube-worker-lx04  Successfully pulled image "emccorp/zookeeper:3.5.4-beta-operator"
  Normal   Created                 16m (x4 over 16m)    kubelet, k8sstp-kube-worker-lx04  Created container zookeeper
  Normal   Started                 16m (x4 over 16m)    kubelet, k8sstp-kube-worker-lx04  Started container zookeeper
  Warning  BackOff                 102s (x78 over 16m)  kubelet, k8sstp-kube-worker-lx04  Back-off restarting failed container

k8 logs example-solrcloud-zookeeper-1
+ source /conf/env.sh
++ DOMAIN=example-solrcloud-zookeeper-headless.default.svc.cluster.local
++ QUORUM_PORT=2888
++ LEADER_PORT=3888
++ CLIENT_HOST=example-solrcloud-zookeeper-client
++ CLIENT_PORT=2181
+ source /usr/local/bin/zookeeperFunctions.sh
++ set -ex
++ hostname -s
+ HOST=example-solrcloud-zookeeper-1
+ DATA_DIR=/data
+ MYID_FILE=/data/myid
+ LOG4J_CONF=/conf/log4j-quiet.properties
+ DYNCONFIG=/data/zoo.cfg.dynamic
+ [[ example-solrcloud-zookeeper-1 =~ (.*)-([0-9]+)$ ]]
+ NAME=example-solrcloud-zookeeper
+ ORD=1
+ MYID=2
+ WRITE_CONFIGURATION=true
+ REGISTER_NODE=true
+ '[' -f /data/myid ']'
+ set +e
+ nslookup example-solrcloud-zookeeper-headless.default.svc.cluster.local

nslookup: can't resolve '(null)': Name does not resolve
Name:      example-solrcloud-zookeeper-headless.default.svc.cluster.local
Address 1: 10.244.5.214 10-244-5-214.example-solrcloud-zookeeper-client.default.svc.cluster.local
+ [[ 0 -eq 1 ]]
+ set -e
+ set +e
++ zkConnectionString
++ set +e
++ nslookup example-solrcloud-zookeeper-client
++ [[ 0 -eq 1 ]]
++ set -e
++ echo example-solrcloud-zookeeper-client:2181
+ ZKURL=example-solrcloud-zookeeper-client:2181
+ set -e
++ java -Dlog4j.configuration=file:/conf/log4j-quiet.properties -jar /root/zu.jar get-all example-solrcloud-zookeeper-client:2181
Error: Unable to access jarfile /root/zu.jar
+ CONFIG=
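
For what it's worth, one way to check whether the jar the start script expects is actually present in the running image (pod name from the output above):

kubectl exec example-solrcloud-zookeeper-0 -- ls -l /root/zu.jar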

@sepulworld
Contributor

sepulworld commented Nov 2, 2019 via email

@HoustonPutman
Contributor

Please re-open if you are still having issues.

@chaman53

solr 15:53:01.84
solr 15:53:01.84 Welcome to the Bitnami solr container
solr 15:53:01.84 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-solr
solr 15:53:01.84 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-solr/issues
solr 15:53:01.84
solr 15:53:01.84 INFO ==> ** Starting solr setup **
solr 15:53:01.86 INFO ==> Validating settings in SOLR_* env vars...
solr 15:53:01.86 ERROR ==> Not enough nodes for the replicas and shards indicated

My config is below (nesting reconstructed from the solr.* key paths in the chart's doc comments):

solr:
  # Please see all available overrides at https://github.com/bitnami/charts/tree/master/bitnami/solr/#installing-the-chart
  # solr.enabled -- Flag to control whether to deploy SOLR
  enabled: true
  auth:
    # solr.auth.enabled -- Enable or disable auth (if auth is disabled solr-init can't upload the configset/schema.xml for ckan)
    enabled: true
    # solr.auth.adminUser -- The name of the solr admin user
    adminUsername: admin
    # solr.auth.adminPassword -- The password of the solr admin user
    adminPassword: pass
  # solr.collection -- the name of the collection created by solr
  # since we are creating one with solr-init this needs to be blank
  collection:
  # solr.collectionShards -- Number of shards for the SOLR collection
  collectionShards:
  # solr.collectionReplicas -- Number of replicas for each SOLR shard
  collectionReplicas:
  # solr.fullnameOverride -- Name override for the SOLR deployment
  fullnameOverride: *SolrName
  # solr.replicaCount -- Number of SOLR instances in the cluster
  replicaCount: 1
  volumeClaimTemplates:
    # solr.volumeClaimTemplates.storageSize -- Size of Solr PVC
    storageSize: 5Gi
  image:
    registry: docker.io
    # solr.image.repository -- Repository for the SOLR image
    repository: bitnami/solr
    # solr.image.tag -- Tag for the SOLR image
    tag: 8.11.1
  zookeeper:
    # solr.zookeeper.replicaCount -- Number of Zookeeper replicas in the ZK cluster
    replicaCount: 1
    persistence:
      # solr.zookeeper.persistence.size -- Size of ZK PVC
      size: 1Gi
  initialize:
    # solr.initialize.enabled -- Flag whether to initialize the SOLR instance with the provided collection name
    enabled: true
    # solr.initialize.numShards -- Number of shards for the SOLR collection
    numShards: 2
    # solr.initialize.replicationFactor -- Number of replicas for each SOLR shard
    replicationFactor: 1
    # solr.initialize.maxShardsPerNode -- Maximum shards per node
    maxShardsPerNode: 10
    # solr.initialize.configsetName -- Name of the config set used for initializing
    configsetName: ckanConfigSet

@HoustonPutman
Contributor

This is a long-closed issue, and it had to do with the Solr operator crashing. You are using the Bitnami Solr Helm chart, which is completely unrelated to this project and an unofficial (and unsupported) way of running Solr on Kubernetes. Please ask Bitnami for help or start using the Solr Operator.

@chaman53

chaman53 commented Nov 28, 2023 via email

@chaman53

chaman53 commented Nov 28, 2023 via email

@HoustonPutman
Contributor

Please go to https://solr.apache.org/operator/resources.html for more information. I don't know where that error is coming from, but it's not ours.

We have other discussion channels; this is not the place to ask this, especially on an unrelated, long-closed issue.

HoustonPutman closed this as not planned on Nov 28, 2023