
use QueryRow select 1 row tips:clickhouse [Decode]: live_data row 0: read length: size 11015713 too big (10485760 is maximum #867

Closed
liyuan1206010227 opened this issue Jan 10, 2023 · 3 comments
Labels
stale requires a follow-up

Comments

liyuan1206010227 commented Jan 10, 2023

Version information:
github.com/ClickHouse/clickhouse-go/v2 v2.4.3
Go 1.19
My code:

dsn := helpers.StringFormatByArray("%s:%s", []string{
	viper.GetString("clickhouse.host"),
	viper.GetString("clickhouse.port"),
})
CDB = clickhouse.OpenDB(&clickhouse.Options{
	Addr: []string{dsn},
	Auth: clickhouse.Auth{
		Database: viper.GetString("clickhouse.dbname"),
		Username: viper.GetString("clickhouse.username"),
		Password: viper.GetString("clickhouse.password"),
	},
	//TLS: &tls.Config{
	//	InsecureSkipVerify: false,
	//},
	Settings: clickhouse.Settings{
		"max_execution_time":                 60,
		"max_memory_usage":                   4000000000,
		"max_query_size":                     1000000000,
		"max_block_size":                     1000000000,
		"max_bytes_before_external_group_by": 20000000000,
	},
	DialTimeout: 10 * time.Second,
	Compression: &clickhouse.Compression{
		Method: clickhouse.CompressionLZ4,
	},
	Debug:           true,
	Protocol:        clickhouse.HTTP,
	BlockBufferSize: 10,
})
var gpLiveDataRes resgp.GpLiveData
querySql := fmt.Sprintf("SELECT * FROM %s WHERE tournament_id = $1 AND match_id = $2 AND gp_live_match_id = $3 ORDER BY offset_num DESC LIMIT 1", where.TableName())
if err := repositories.GetCDb(s.Tx).QueryRow(
	querySql,
	where.TournamentId,
	where.MatchId,
	where.GpLiveMatchId,
).Scan(
	&gpLiveDataRes.TournamentId,
	&gpLiveDataRes.MatchId,
	&gpLiveDataRes.GpLiveMatchId,
	&gpLiveDataRes.OffsetNum,
	&gpLiveDataRes.LiveDataNum,
	&gpLiveDataRes.LiveData,
	&gpLiveDataRes.TimeStamps,
	&gpLiveDataRes.Dateline,
); err != nil {
	return gpLiveDataRes, err
}
return gpLiveDataRes, nil

How can I lift this restriction?

jkaflik (Contributor) commented Jan 17, 2023

Hi @liyuan1206010227,
First of all, sorry for the late follow-up on this issue.
Could you please give me more details? I assume the issue title is the error message you get when QueryRow is called. Is that correct?

jkaflik added the stale (requires a follow-up) label Feb 9, 2023
et (Contributor) commented Mar 28, 2023

Hey, I hit this issue too; it occurs when a column holds a lot of data. You can bump the size of MaxCompressionBuffer to fix the read problem, but realistically a more practical fix is to truncate the data to something reasonable before writing it to ClickHouse.
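
For reference, a minimal sketch of bumping that option (assuming the clickhouse-go v2 version in use exposes MaxCompressionBuffer; the 10485760-byte default is the maximum quoted in the error message, and the address is a placeholder):

// Sketch only: raise the compression buffer above the default 10 MiB
// (10485760 bytes) so an 11015713-byte value can be decoded.
db := clickhouse.OpenDB(&clickhouse.Options{
	Addr: []string{"localhost:9000"}, // hypothetical address
	Compression: &clickhouse.Compression{
		Method: clickhouse.CompressionLZ4,
	},
	MaxCompressionBuffer: 16 << 20, // 16 MiB, comfortably above 11015713
})
defer db.Close()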

I've also discovered that ClickHouse generally doesn't handle large columns well (see ClickHouse/ClickHouse#7187), so bumping MaxCompressionBuffer isn't really going to solve your issue long term.

I'm not sure what @liyuan1206010227's data looks like, but judging by the other column names, &gpLiveDataRes.LiveData is probably the culprit, and they should reconsider what data they're storing in it.
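
If live_data is indeed the oversized column, a hedged sketch of the truncate-before-write approach suggested above (truncateLiveData is a hypothetical helper; the cap mirrors the decoder's default maximum):

// Hypothetical helper: cap a value below the decoder's default 10 MiB
// maximum before writing it to ClickHouse. Note that slicing a Go string
// by bytes can split a multi-byte UTF-8 rune at the cut point.
const maxLiveDataBytes = 10 << 20 // 10485760, the default limit

func truncateLiveData(liveData string) string {
	if len(liveData) > maxLiveDataBytes {
		return liveData[:maxLiveDataBytes]
	}
	return liveData
}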

jkaflik (Contributor) commented Aug 18, 2023

Fixed in #1071

jkaflik closed this as completed Aug 18, 2023
3 participants