nrpgx5 data-race #855

Closed
UnAfraid opened this issue Feb 23, 2024 · 5 comments · Fixed by #888

UnAfraid commented Feb 23, 2024

Description

nrpgx5 tracer causes data-race

During integration tests with testcontainers, we observed data races even when the New Relic application is not initialized:

cfg, err := pgxpool.ParseConfig("postgres://postgres:postgres@localhost:5432")
if err != nil {
    panic(err)
}

cfg.ConnConfig.Tracer = nrpgx5.NewTracer()
db, err := pgxpool.NewWithConfig(context.Background(), cfg)
if err != nil {
    panic(err)
}
==================
WARNING: DATA RACE
Write at 0x00c000504600 by goroutine 666:
  github.com/newrelic/go-agent/v3/integrations/nrpgx5.(*Tracer).TraceConnectStart()
      github.com/newrelic/go-agent/v3/integrations/nrpgx5/nrpgx5.go:122 +0xdc
  github.com/jackc/pgx/v5.connect()
      github.com/jackc/pgx/v5/conn.go:224 +0xa0
  github.com/jackc/pgx/v5.ConnectConfig()
      github.com/jackc/pgx/v5/conn.go:141 +0x114
  github.com/jackc/pgx/v5/pgxpool.NewWithConfig.func1()
      github.com/jackc/pgx/v5/pgxpool/pool.go:216 +0x1e0
  github.com/jackc/puddle/v2.(*Pool[go.shape.*uint8]).initResourceValue.func1()
      github.com/jackc/puddle/v2/pool.go:409 +0x130
@UnAfraid UnAfraid added the bug label Feb 23, 2024
iamemilio (Contributor) commented

@UnAfraid can you confirm that this only happens when using nrpgx5, and not just with pgx5? Is this code block a reproducer?

cfg, err := pgxpool.ParseConfig("postgres://postgres:postgres@localhost:5432")
if err != nil {
    panic(err)
}

cfg.ConnConfig.Tracer = nrpgx5.NewTracer()
db, err := pgxpool.NewWithConfig(context.Background(), cfg)
if err != nil {
    panic(err)
}

@iamemilio iamemilio assigned iamemilio and unassigned iamemilio Mar 5, 2024

UnAfraid commented Mar 6, 2024

I have had an integration test working with pgx5 for quite a while now, and it works fine.
The test uses testcontainers with Postgres 15 and:

  1. Awaits the database coming up
  2. Runs migrations with go-migrate (also using the pgx5 driver)
  3. Runs some queries in transactions, such as:
  • TX(select)
  • TX(create -> select)
  • TX(select for update -> update -> select)
  • TX(select for update -> delete -> select)

The data race started happening as soon as I added cfg.ConnConfig.Tracer = nrpgx5.NewTracer().

iamemilio (Contributor) commented

Thanks! We will spend some time diving into this then.

WillAbides commented
I ran into the same issue. It happens because, when you follow the example in this repo and assign the tracer with cfg.ConnConfig.Tracer = nrpgx5.NewTracer(), every connection in the pool gets the same tracer instance. As you've seen, this tracer isn't concurrency safe because it doesn't guard against concurrent access to BaseSegment.

I worked around this by using cfg.BeforeConnect to set a fresh tracer on each connection before it is established. Here's what my code looks like:

import (
	"context"

	"github.com/jackc/pgx/v5"
	"github.com/jackc/pgx/v5/pgxpool"
	"github.com/newrelic/go-agent/v3/integrations/nrpgx5"
)

func NewPgxPool(ctx context.Context, dbURL string) (*pgxpool.Pool, error) {
	cfg, err := pgxpool.ParseConfig(dbURL)
	if err != nil {
		return nil, err
	}

	// BeforeConnect runs once per new connection, so each connection
	// gets its own tracer instance instead of sharing one.
	cfg.BeforeConnect = func(_ context.Context, config *pgx.ConnConfig) error {
		config.Tracer = nrpgx5.NewTracer()
		return nil
	}

	return pgxpool.NewWithConfig(ctx, cfg)
}

iamemilio added a commit to iamemilio/go-agent that referenced this issue Mar 15, 2024
iamemilio (Contributor) commented

Thanks @WillAbides. This is in line with how the documentation recommends using the pgx5 pool in concurrent contexts. We will update the documentation on our end and credit you in the commit!
