
Sync job fails/retries itself after successfully transferring all the data. #5870

Closed
gui0506 opened this issue Sep 7, 2021 · 20 comments
Labels: area/connectors (Connector related issues), cdc, frozen (Not being actively worked on), lang/java, team/destinations (Destinations team's backlog), type/bug (Something isn't working)

Comments

@gui0506 commented Sep 7, 2021

Environment

  • Airbyte version: 0.29.13-alpha
  • OS Version / Instance: AWS EKS
  • Deployment: Kubernetes
  • Source Connector and version: Postgres 0.3.11
  • Destination Connector and version: BigQuery 0.1.1
  • Severity: Very Low / Low / Medium / High / Critical
  • Step where error happened: Deploy / Sync job / Setup new connection / Update connector / Upgrade Airbyte

Current Behavior

The sync job fails after successfully transferring all the data. The Kubernetes pods are terminated gracefully with Completed status.

Expected Behavior

Sync job should not fail after successfully transferring all data.

Logs

logs-185-0.txt

2021-09-06 23:56:25 ERROR () DefaultReplicationWorker(run):148 - Sync worker failed.
java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.lang.RuntimeException: Cannot find pod while trying to retrieve exit code. This probably means the Pod was not correctly created.
at java.util.concurrent.FutureTask.report(FutureTask.java:122) ~[?:?]
at java.util.concurrent.FutureTask.get(FutureTask.java:191) ~[?:?]
at io.airbyte.workers.DefaultReplicationWorker.run(DefaultReplicationWorker.java:140) ~[io.airbyte-airbyte-workers-0.29.12-alpha.jar:?]
at io.airbyte.workers.DefaultReplicationWorker.run(DefaultReplicationWorker.java:52) ~[io.airbyte-airbyte-workers-0.29.12-alpha.jar:?]
at io.airbyte.workers.temporal.TemporalAttemptExecution.lambda$getWorkerThread$2(TemporalAttemptExecution.java:146) ~[io.airbyte-airbyte-workers-0.29.12-alpha.jar:?]
at java.lang.Thread.run(Thread.java:832) [?:?]
Suppressed: java.lang.RuntimeException: Cannot find pod while trying to retrieve exit code. This probably means the Pod was not correctly created.
at io.airbyte.workers.process.KubePodProcess.getReturnCode(KubePodProcess.java:548) ~[io.airbyte-airbyte-workers-0.29.12-alpha.jar:?]
at io.airbyte.workers.process.KubePodProcess.exitValue(KubePodProcess.java:573) ~[io.airbyte-airbyte-workers-0.29.12-alpha.jar:?]
at java.lang.Process.hasExited(Process.java:333) ~[?:?]
at java.lang.Process.isAlive(Process.java:323) ~[?:?]
at io.airbyte.workers.WorkerUtils.gentleCloseWithHeartbeat(WorkerUtils.java:111) ~[io.airbyte-airbyte-workers-0.29.12-alpha.jar:?]
at io.airbyte.workers.WorkerUtils.gentleCloseWithHeartbeat(WorkerUtils.java:95) ~[io.airbyte-airbyte-workers-0.29.12-alpha.jar:?]
at io.airbyte.workers.protocols.airbyte.DefaultAirbyteSource.close(DefaultAirbyteSource.java:126) ~[io.airbyte-airbyte-workers-0.29.12-alpha.jar:?]
at io.airbyte.workers.DefaultReplicationWorker.run(DefaultReplicationWorker.java:121) ~[io.airbyte-airbyte-workers-0.29.12-alpha.jar:?]
at io.airbyte.workers.DefaultReplicationWorker.run(DefaultReplicationWorker.java:52) ~[io.airbyte-airbyte-workers-0.29.12-alpha.jar:?]
at io.airbyte.workers.temporal.TemporalAttemptExecution.lambda$getWorkerThread$2(TemporalAttemptExecution.java:146) ~[io.airbyte-airbyte-workers-0.29.12-alpha.jar:?]
at java.lang.Thread.run(Thread.java:832) [?:?]

Steps to Reproduce

  1. Set up a connection with Postgres as the source (CDC) and BigQuery as the target
  2. Sync a large table (In my case, 10.18 GB | 37,656,941 records)
  3. Sometimes the sync will fail and retry even though everything was successful (not always; roughly 20% of the time).

Are you willing to submit a PR?

@gui0506 added the type/bug label on Sep 7, 2021
@sherifnada added the area/connectors and lang/java labels on Sep 10, 2021
@danieldiamond (Contributor) commented Sep 12, 2021

@sherifnada I'm also experiencing this randomly now on subsequent syncs.
CDC configuration:
Source: MySQL 0.4.4
Destination: Snowflake 0.3.14
EC2 Docker Airbyte instance 0.29.17-alpha

@sherifnada (Contributor)

Reading the logs, I can't see anything wrong with the sync itself, which makes it look like a worker coordination issue.

@danieldiamond can you share logs from your failing instance?

cc @jrhizor do you have any ideas about what might be going on here?

@danieldiamond (Contributor)

logs-23-0.txt

Looking back at these logs, they actually start with the error:

The connector is trying to read binlog starting at binlog file 'mysql-bin-changelog.162732', pos=834769, skipping 2 events plus 1 rows, but this is no longer available on the server. Reconfigure the connector to use a snapshot when needed.

but the job continues ahead and appears to "succeed" whilst still causing a retry
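
For context on that error: it means the MySQL server has already purged the binlog position the connector saved, so CDC has to fall back to a new snapshot. A quick way to check (and, on RDS, extend) binlog retention from the command line; this is only a sketch, the relevant variable depends on MySQL version and hosting, and $MYSQL_HOST / $MYSQL_USER are placeholders:

```sh
# Check how long binlogs are kept. MySQL 8.0+ uses binlog_expire_logs_seconds,
# older versions use expire_logs_days; 0 means "never expire automatically".
mysql -h "$MYSQL_HOST" -u "$MYSQL_USER" -p -e \
  "SHOW VARIABLES WHERE Variable_name IN ('binlog_expire_logs_seconds', 'expire_logs_days');"

# On Amazon RDS / Aurora MySQL (only if that's where the source lives), retention
# is controlled through a stored procedure instead of the server variables above:
mysql -h "$MYSQL_HOST" -u "$MYSQL_USER" -p -e \
  "CALL mysql.rds_set_configuration('binlog retention hours', 24);"
```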

@sherifnada (Contributor)

@danieldiamond what's the sync frequency on this one? This sounds like the sync waited "too long" and now the data is no longer in the binlog.

Alternatively, are you re-using the same CDC source in multiple connections, potentially having overlapping consumers of that binlog?

@danieldiamond (Contributor)

Screen Shot 2021-09-14 at 11 16 30 am

attached sync timestamps

  1. See sync frequency: it is manual, so I ran it once (full refresh) and then again 2 days later. I doubt the sync waited too long, as that "successful but retried" sync took 8 minutes.
  2. Only one CDC source for one connection. (In a separate Airbyte instance I have the same CDC source with the same account, but that connection is disabled, and in this new instance the full-refresh sync on 09/10 worked fine.)

Separately, on point 2: I do have multiple connections with the same CDC source to the same destination. Is this not allowed? I run these multiple connections at the same time and they appeared to work as expected, although I do run into the "hanging" issue, where it doesn't actually COPY the data after reading it.

@sherifnada (Contributor)

I do have multiple connections with the same CDC source to the same destination. is this not allowed?

They shouldn't send into the same schema, otherwise you'll get overwrites, but it shouldn't cause the source connector to fail in either case.

@maofet commented Feb 1, 2022

@sherifnada @karinakuz are there any plans to fix this?

@ShahNewazKhan commented Feb 6, 2022

Seeing the same error with BigQuery as a destination. The data is in the BQ dataset and it looks like dbt was also run; however, I'm seeing logs like this in the attempts marked as failed:

2022-02-05 22:15:34 INFO i.a.w.p.KubePodProcess(destroy):588 - Destroyed Kube process: destination-bigquery-sync-2-1-nemwd
2022-02-05 22:15:34 ERROR i.a.w.DefaultReplicationWorker(run):141 - Sync worker failed.
java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.lang.RuntimeException: Cannot find pod while trying to retrieve exit code. This probably means the Pod was not correctly created.
	at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:396) ~[?:?]
	at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:2073) ~[?:?]
	at io.airbyte.workers.DefaultReplicationWorker.run(DefaultReplicationWorker.java:134) ~[io.airbyte-airbyte-workers-0.35.10-alpha.jar:?]
	at io.airbyte.workers.DefaultReplicationWorker.run(DefaultReplicationWorker.java:52) ~[io.airbyte-airbyte-workers-0.35.10-alpha.jar:?]
	at io.airbyte.workers.temporal.TemporalAttemptExecution.lambda$getWorkerThread$2(TemporalAttemptExecution.java:174) ~[io.airbyte-airbyte-workers-0.35.10-alpha.jar:?]
	at java.lang.Thread.run(Thread.java:833) [?:?]
	Suppressed: java.lang.RuntimeException: Cannot find pod while trying to retrieve exit code. This probably means the Pod was not correctly created.

@dermasmid

I had the same thing with Klaviyo → MongoDB. Here are the logs:

2022-02-07 06:33:05 source > Finished syncing SourceKlaviyo
2022-02-07 06:33:05 source > SourceKlaviyo runtimes:
2022-02-07 06:33:05 source > Finished syncing SourceKlaviyo
2022-02-07 06:33:35 destination > 2022-02-07 06:33:35 INFO i.a.i.b.FailureTrackingAirbyteMessageConsumer(close):63 - Airbyte message consumer: succeeded.
2022-02-07 06:33:35 destination > 2022-02-07 06:33:35 INFO i.a.i.d.m.MongodbRecordConsumer(close):88 - Migration finished with no explicit errors. Copying data from tmp tables to permanent
2022-02-07 06:34:35 INFO i.a.w.p.KubePodProcess(destroy):582 - Destroying Kube process: destination-mongodb-sync-55-0-ltnnp
2022-02-07 06:34:35 INFO i.a.w.p.KubePodProcess(destroy):588 - Destroyed Kube process: destination-mongodb-sync-55-0-ltnnp
2022-02-07 06:34:35 WARN i.a.c.i.LineGobbler(voidCall):86 - airbyte-destination gobbler IOException: Socket closed. Typically happens when cancelling a job.
2022-02-07 06:35:35 WARN i.a.w.WorkerUtils(closeProcess):56 - Process is still alive after calling destroy. Attempting to destroy forcibly...
2022-02-07 06:35:35 INFO i.a.w.p.KubePodProcess(destroy):582 - Destroying Kube process: destination-mongodb-sync-55-0-ltnnp
2022-02-07 06:35:35 INFO i.a.w.p.KubePodProcess(destroy):588 - Destroyed Kube process: destination-mongodb-sync-55-0-ltnnp
2022-02-07 06:35:35 ERROR i.a.w.DefaultReplicationWorker(run):158 - Sync worker failed.
java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.lang.RuntimeException: Cannot find pod while trying to retrieve exit code. This probably means the Pod was not correctly created.

@Kopiczek commented Mar 2, 2022

Seeing the same with Zendesk on a medium-size sync of 70k records (although slow: 1h 41m).

2022-03-02 01:55:14 ERROR i.a.w.DefaultReplicationWorker(run):158 - Sync worker failed.
java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.lang.RuntimeException: Cannot find pod while trying to retrieve exit code. This probably means the Pod was not correctly created.
	at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:396) ~[?:?]
	at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:2073) ~[?:?]
	at io.airbyte.workers.DefaultReplicationWorker.run(DefaultReplicationWorker.java:151) ~[io.airbyte-airbyte-workers-0.35.12-alpha.jar:?]
	at io.airbyte.workers.DefaultReplicationWorker.run(DefaultReplicationWorker.java:56) ~[io.airbyte-airbyte-workers-0.35.12-alpha.jar:?]
	at io.airbyte.workers.temporal.TemporalAttemptExecution.lambda$getWorkerThread$2(TemporalAttemptExecution.java:171) ~[io.airbyte-airbyte-workers-0.35.12-alpha.jar:?]
	at java.lang.Thread.run(Thread.java:833) [?:?]
	Suppressed: java.lang.RuntimeException: Cannot find pod while trying to retrieve exit code. This probably means the Pod was not correctly created.
		at io.airbyte.workers.process.KubePodProcess.getReturnCode(KubePodProcess.java:676) ~[io.airbyte-airbyte-workers-0.35.12-alpha.jar:?]
		at io.airbyte.workers.process.KubePodProcess.exitValue(KubePodProcess.java:703) ~[io.airbyte-airbyte-workers-0.35.12-alpha.jar:?]
		at java.lang.Process.hasExited(Process.java:584) ~[?:?]
		at java.lang.Process.isAlive(Process.java:574) ~[?:?]
		at io.airbyte.workers.WorkerUtils.gentleClose(WorkerUtils.java:36) ~[io.airbyte-airbyte-workers-0.35.12-alpha.jar:?]
		at io.airbyte.workers.protocols.airbyte.DefaultAirbyteSource.close(DefaultAirbyteSource.java:128) ~[io.airbyte-airbyte-workers-0.35.12-alpha.jar:?]
		at io.airbyte.workers.DefaultReplicationWorker.run(DefaultReplicationWorker.java:125) ~[io.airbyte-airbyte-workers-0.35.12-alpha.jar:?]
		at io.airbyte.workers.DefaultReplicationWorker.run(DefaultReplicationWorker.java:56) ~[io.airbyte-airbyte-workers-0.35.12-alpha.jar:?]
		at io.airbyte.workers.temporal.TemporalAttemptExecution.lambda$getWorkerThread$2(TemporalAttemptExecution.java:171) ~[io.airbyte-airbyte-workers-0.35.12-alpha.jar:?]
		at java.lang.Thread.run(Thread.java:833) [?:?]
	Suppressed: io.airbyte.workers.WorkerException: Destination has not terminated . This warning is normal if the job was cancelled.
		at io.airbyte.workers.protocols.airbyte.DefaultAirbyteDestination.close(DefaultAirbyteDestination.java:119) ~[io.airbyte-airbyte-workers-0.35.12-alpha.jar:?]
		at io.airbyte.workers.DefaultReplicationWorker.run(DefaultReplicationWorker.java:125) ~[io.airbyte-airbyte-workers-0.35.12-alpha.jar:?]
		at io.airbyte.workers.DefaultReplicationWorker.run(DefaultReplicationWorker.java:56) ~[io.airbyte-airbyte-workers-0.35.12-alpha.jar:?]
		at io.airbyte.workers.temporal.TemporalAttemptExecution.lambda$getWorkerThread$2(TemporalAttemptExecution.java:171) ~[io.airbyte-airbyte-workers-0.35.12-alpha.jar:?]
		at java.lang.Thread.run(Thread.java:833) [?:?]
Caused by: java.lang.RuntimeException: java.lang.RuntimeException: Cannot find pod while trying to retrieve exit code. This probably means the Pod was not correctly created.
	at io.airbyte.workers.DefaultReplicationWorker.lambda$getReplicationRunnable$5(DefaultReplicationWorker.java:295) ~[io.airbyte-airbyte-workers-0.35.12-alpha.jar:?]
	at java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1804) ~[?:?]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) ~[?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) ~[?:?]
	... 1 more
Caused by: java.lang.RuntimeException: Cannot find pod while trying to retrieve exit code. This probably means the Pod was not correctly created.
	at io.airbyte.workers.process.KubePodProcess.getReturnCode(KubePodProcess.java:676) ~[io.airbyte-airbyte-workers-0.35.12-alpha.jar:?]
	at io.airbyte.workers.process.KubePodProcess.exitValue(KubePodProcess.java:703) ~[io.airbyte-airbyte-workers-0.35.12-alpha.jar:?]
	at java.lang.Process.hasExited(Process.java:584) ~[?:?]
	at java.lang.Process.isAlive(Process.java:574) ~[?:?]
	at io.airbyte.workers.protocols.airbyte.DefaultAirbyteSource.isFinished(DefaultAirbyteSource.java:98) ~[io.airbyte-airbyte-workers-0.35.12-alpha.jar:?]
	at io.airbyte.workers.DefaultReplicationWorker.lambda$getReplicationRunnable$5(DefaultReplicationWorker.java:271) ~[io.airbyte-airbyte-workers-0.35.12-alpha.jar:?]
	at java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1804) ~[?:?]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) ~[?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) ~[?:?]
	... 1 more

Also:

io.airbyte.config.FailureReason@4c24adc2[failureOrigin=destination,failureType=<null>,internalMessage=java.lang.RuntimeException: java.io.UncheckedIOException: java.net.SocketException: Socket closed,externalMessage=Something went wrong within the destination connector,metadata=io.airbyte.config.Metadata@20271dbf[additionalProperties={attemptNumber=2, jobId=175005}],stacktrace=java.util.concurrent.CompletionException: java.lang.RuntimeException: java.io.UncheckedIOException: java.net.SocketException: Socket closed
	at java.base/java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:315)
	at java.base/java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:320)
	at java.base/java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1807)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
	at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: java.lang.RuntimeException: java.io.UncheckedIOException: java.net.SocketException: Socket closed
	at io.airbyte.workers.DefaultReplicationWorker.lambda$getDestinationOutputRunnable$6(DefaultReplicationWorker.java:325)
	at java.base/java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1804)
	... 3 more
Caused by: java.io.UncheckedIOException: java.net.SocketException: Socket closed
	at java.base/java.io.BufferedReader$1.hasNext(BufferedReader.java:574)
	at java.base/java.util.Spliterators$IteratorSpliterator.tryAdvance(Spliterators.java:1855)
	at java.base/java.util.stream.StreamSpliterators$WrappingSpliterator.lambda$initPartialTraversalState$0(StreamSpliterators.java:292)
	at java.base/java.util.stream.StreamSpliterators$AbstractWrappingSpliterator.fillBuffer(StreamSpliterators.java:206)
	at java.base/java.util.stream.StreamSpliterators$AbstractWrappingSpliterator.doAdvance(StreamSpliterators.java:169)
	at java.base/java.util.stream.StreamSpliterators$WrappingSpliterator.tryAdvance(StreamSpliterators.java:298)
	at java.base/java.util.Spliterators$1Adapter.hasNext(Spliterators.java:681)
	at io.airbyte.workers.protocols.airbyte.DefaultAirbyteDestination.isFinished(DefaultAirbyteDestination.java:141)
	at io.airbyte.workers.DefaultReplicationWorker.lambda$getDestinationOutputRunnable$6(DefaultReplicationWorker.java:309)
	... 4 more
Caused by: java.net.SocketException: Socket closed
	at java.base/sun.nio.ch.NioSocketImpl.endRead(NioSocketImpl.java:248)
	at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:327)
	at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:350)
	at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:803)
	at java.base/java.net.Socket$SocketInputStream.read(Socket.java:966)
	at java.base/sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:270)
	at java.base/sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:313)
	at java.base/sun.nio.cs.StreamDecoder.read(StreamDecoder.java:188)
	at java.base/java.io.InputStreamReader.read(InputStreamReader.java:177)
	at java.base/java.io.BufferedReader.fill(BufferedReader.java:162)
	at java.base/java.io.BufferedReader.readLine(BufferedReader.java:329)
	at java.base/java.io.BufferedReader.readLine(BufferedReader.java:396)
	at java.base/java.io.BufferedReader$1.hasNext(BufferedReader.java:571)

@lukeolson13

FYI, you should be able to fix this in the current state by upping the SUCCESS_DATE_STR here from 2 hours to something more reasonable given your long syncs (i.e. 24 hours to be safe).
We're using Kubernetes, so we accessed it via kubectl edit configmap sweep-pod-script and then restarted the pod-sweeper, but this could look different depending on your deployment.
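
For anyone hitting this on Kubernetes, a rough sketch of that workaround (the configmap name comes from the comment above; the namespace placeholder and the pod-sweeper deployment name are assumptions to verify against your own install):

```sh
# Inspect the sweeper script and locate the SUCCESS_DATE_STR cutoff (2 hours by default).
kubectl -n <airbyte-namespace> get configmap sweep-pod-script -o yaml

# Widen the cutoff for long-running syncs, e.g. from 2 hours to 24 hours.
kubectl -n <airbyte-namespace> edit configmap sweep-pod-script

# Restart the pod sweeper so it re-mounts the modified script.
# (Deployment name varies between releases; check `kubectl get deployments` first.)
kubectl -n <airbyte-namespace> rollout restart deployment <airbyte-pod-sweeper-deployment>
```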

@Kopiczek commented Mar 2, 2022

@lukeolson13 thanks, we'll give it a try by manually applying the fix from #10614,
but I don't think it will work (we'll find out in 2 h).
The current script removes pods that were created more than 2 h ago; our sync took 1 h 40 min, so it shouldn't be swept anyway, unless there are more bugs in the sweeper code.

@Kopiczek commented Mar 2, 2022

Another failure; it has to be something else that's deleting the pod. We're running on Kubernetes in GCP, so maybe there is another default sweeper there.
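
If it helps to narrow down what is removing the pod, the Kubernetes event log sometimes shows the deletion; a sketch, with the namespace and pod name as placeholders:

```sh
# List recent events for the sync pod that disappeared. On managed clusters the
# terminated-pod garbage collector can remove Succeeded pods once the cluster-wide
# threshold of terminated pods is exceeded.
kubectl -n <airbyte-namespace> get events --sort-by=.lastTimestamp \
  --field-selector involvedObject.name=<sync-pod-name>
```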

@alvaroqueiroz (Contributor)

I had this happening in different jobs.
The syncs took 13 and 15 hours. Can anyone confirm that changing SUCCESS_DATE_STR solves the problem?

@alvaroqueiroz (Contributor)

The problem is not solved by changing SUCCESS_DATE_STR or applying #10614.

@Kopiczek

@alvaroqueiroz for us it ended up being because of the GKE GC kicking in: #10934

@imrisadeh

Facing the same issue. The second retry finished successfully, but it caused duplications in the destination since the cursor didn't update after the first job failed despite successfully ingesting the data. Any estimation of when this will get priority? Or any suggestions for a workaround?

@sivankumar86 (Contributor) commented Aug 10, 2022

It seems I'm facing the same issue in the latest version of Airbyte (0.40.18). I am using the SQL Server (mssql) connector. The retry succeeds, and I am handling duplicates in Snowflake using a dbt job.

Error:

2022-08-10 22:17:50 INFO i.a.w.p.KubePodProcess(destroy):662 - (pod: product-analytics / source-mssql-read-16787-0-jgvhm) - Destroying Kube process.
2022-08-10 22:17:50 INFO i.a.w.p.KubePodProcess(close):717 - (pod: product-analytics / source-mssql-read-16787-0-jgvhm) - Closed all resources for pod
2022-08-10 22:17:50 INFO i.a.w.p.KubePodProcess(destroy):668 - (pod: product-analytics / source-mssql-read-16787-0-jgvhm) - Destroyed Kube process.
2022-08-10 22:18:50 INFO i.a.w.p.KubePodProcess(destroy):662 - (pod: product-analytics / destination-snowflake-write-16787-0-fgpdd) - Destroying Kube process.
2022-08-10 22:18:50 INFO i.a.w.p.KubePodProcess(close):717 - (pod: product-analytics / destination-snowflake-write-16787-0-fgpdd) - Closed all resources for pod
2022-08-10 22:18:50 WARN i.a.c.i.LineGobbler(voidCall):86 - airbyte-destination gobbler IOException: Socket closed. Typically happens when cancelling a job.
2022-08-10 22:18:50 INFO i.a.w.p.KubePodProcess(destroy):668 - (pod: product-analytics / destination-snowflake-write-16787-0-fgpdd) - Destroyed Kube process.
2022-08-10 22:18:50 ERROR i.a.w.g.DefaultReplicationWorker(run):181 - Sync worker failed.
java.util.concurrent.ExecutionException: io.airbyte.workers.general.DefaultReplicationWorker$SourceException: Source cannot be stopped!
at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:396) ~[?:?]
at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:2073) ~[?:?]
at io.airbyte.workers.general.DefaultReplicationWorker.run(DefaultReplicationWorker.java:174) ~[io.airbyte-airbyte-workers-0.39.42-alpha.jar:?]
at io.airbyte.workers.general.DefaultReplicationWorker.run(DefaultReplicationWorker.java:65) ~[io.airbyte-airbyte-workers-0.39.42-alpha.jar:?]
at io.airbyte.workers.temporal.TemporalAttemptExecution.lambda$getWorkerThread$2(TemporalAttemptExecution.java:155) ~[io.airbyte-airbyte-workers-0.39.42-alpha.jar:?]

logs-16787.txt

@sivankumar86 (Contributor)

I think the error is misleading. I increased the CPU of the pod, and since then I do not see the error message/retry attempt anymore.
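
For reference, a sketch of how the connector pod CPU can be raised on a Kubernetes deployment. JOB_MAIN_CONTAINER_CPU_REQUEST and JOB_MAIN_CONTAINER_CPU_LIMIT are the worker environment variables Airbyte documents for job pod resources, but the deployment name and the example values below are assumptions; adjust them for your install:

```sh
# Raise the CPU request/limit applied to job (source/destination) pods, then
# let the worker restart so newly launched sync pods pick up the new values.
kubectl -n <airbyte-namespace> set env deployment/airbyte-worker \
  JOB_MAIN_CONTAINER_CPU_REQUEST=1 \
  JOB_MAIN_CONTAINER_CPU_LIMIT=2
```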

@bleonard added the frozen label on Mar 22, 2024
@natikgadzhi (Contributor)

This should have been resolved long ago.
