
statistics: do not depend on table information when calculating the table size #56036

Merged

Conversation

@Rustin170506 Rustin170506 commented Sep 12, 2024

What problem does this PR solve?

Issue Number: ref #55906

Problem Summary:

What changed and how does it work?

In this pull request, I use the table statistics to obtain the column count instead of relying on the table information schema. This eliminates the need to retrieve the table information schema when updating the analysis job based on the new table row count.

See: https://github.com/pingcap/tidb/pull/55889/files#r1756288302

We only need ColNum here because every time we create a table or add a new column, we also create a histogram record for it. After that, we load it into memory (if it has been updated). So it is usually the same as the column number from the table information schema.
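To make the idea concrete, here is a minimal, self-contained Rust sketch of the mechanism (TiDB itself is written in Go; every name below is illustrative, not a real TiDB type): the column count is taken from the histogram records already held in the table statistics, and the table size used for job weighting is simply column count × row count.

use std::collections::HashMap;

// Stand-in for a loaded column histogram; one record exists per column
// because a histogram row is created whenever a table or column is created.
struct ColumnStats;

struct TableStats {
    columns: HashMap<i64, ColumnStats>, // keyed by column ID
    row_count: u64,
}

impl TableStats {
    // ColNum without consulting the table information schema.
    fn col_num(&self) -> usize {
        self.columns.len()
    }

    // TableSize = column count × row count.
    fn table_size(&self) -> f64 {
        self.col_num() as f64 * self.row_count as f64
    }
}

fn main() {
    let mut columns = HashMap::new();
    for col_id in 1..=5_i64 {
        columns.insert(col_id, ColumnStats);
    }
    let stats = TableStats { columns, row_count: 3000 };
    assert_eq!(stats.table_size(), 15000.0); // 5 columns × 3000 rows, as in the manual test below
    println!("ColNum = {}, TableSize = {:.2}", stats.col_num(), stats.table_size());
}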

Check List

Tests

Side effects

  • Performance regression: Consumes more CPU
  • Performance regression: Consumes more Memory
  • Breaking backward compatibility

Documentation

  • Affects user behaviors
  • Contains syntax changes
  • Contains variable changes
  • Contains experimental features
  • Changes MySQL compatibility

Release note

Please refer to Release Notes Language Style Guide to write a quality release note.

None

@ti-chi-bot bot added labels on Sep 12, 2024: release-note-none (Denotes a PR that doesn't merit a release note), sig/planner (SIG: Planner), size/XS (Denotes a PR that changes 0-9 lines, ignoring generated files)
@Rustin170506 changed the title from "statistics: do not depend on table information when calculating the t…" to "statistics: do not depend on table information when calculating the table size" on Sep 12, 2024

codecov bot commented Sep 12, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 57.0992%. Comparing base (00aac17) to head (88b71bb).
Report is 29 commits behind head on master.

Additional details and impacted files
@@                Coverage Diff                @@
##             master     #56036         +/-   ##
=================================================
- Coverage   72.9454%   57.0992%   -15.8462%     
=================================================
  Files          1604       1761        +157     
  Lines        446749     635891     +189142     
=================================================
+ Hits         325883     363089      +37206     
- Misses       100805     248139     +147334     
- Partials      20061      24663       +4602     
Flag         Coverage Δ
integration  39.8043% <0.0000%> (?)
unit         72.3390% <100.0000%> (+0.2793%) ⬆️

Flags with carried forward coverage won't be shown. Click here to find out more.

Components  Coverage Δ
dumpling    52.9567% <ø> (ø)
parser      ∅ <ø> (∅)
br          61.0706% <ø> (+15.2993%) ⬆️

@Rustin170506
Member Author

Tested locally:

  1. Start the TiDB cluster: tiup playground v8.2.0 --db.binpath /Users/rustin/code/tidb/bin/tidb-server
  2. Create some tables and insert data:
#!/usr/bin/env -S cargo +nightly -Zscript
---cargo
[dependencies]
clap = { version = "4.2", features = ["derive"] }
sqlx = { version = "0.7", features = ["runtime-tokio-rustls", "mysql"] }
tokio = { version = "1", features = ["full"] }
fake = { version = "2.5", features = ["derive"] }
---

use clap::Parser;
use fake::{Fake, Faker};
use sqlx::mysql::MySqlPoolOptions;

#[derive(Parser, Debug)]
#[clap(version)]
struct Args {
    #[clap(short, long, help = "MySQL connection string")]
    database_url: String,
}

#[derive(Debug)]
struct TableRow {
    id: i64,
    column1: String,
    column2: i32,
    column3: i32,
    column4: String,
}

#[tokio::main]
async fn main() -> Result<(), sqlx::Error> {
    let args = Args::parse();

    let pool = MySqlPoolOptions::new()
        .max_connections(5)
        .connect(&args.database_url)
        .await?;

    // Create 20 tables
    for i in 0..20 {
        let table_name = format!("t{}", i);
        let create_table_query = format!(
            "CREATE TABLE IF NOT EXISTS {} (
                id BIGINT NOT NULL PRIMARY KEY,
                column1 VARCHAR(255) NOT NULL,
                column2 INT NOT NULL,
                column3 INT NOT NULL,
                column4 VARCHAR(255) NOT NULL,
                INDEX idx_column1 (column1)
            )",
            table_name
        );

        sqlx::query(&create_table_query)
            .execute(&pool)
            .await?;

        println!("Created table: {}", table_name);

        // Insert 3000 rows into each table
        for _ in 0..3000 {
            let row = TableRow {
                id: Faker.fake::<i64>(),
                column1: Faker.fake::<String>(),
                column2: Faker.fake::<i32>(),
                column3: Faker.fake::<i32>(),
                column4: Faker.fake::<String>(),
            };

            let insert_query = format!(
                "INSERT INTO {} (id, column1, column2, column3, column4)
                VALUES (?, ?, ?, ?, ?)",
                table_name
            );

            sqlx::query(&insert_query)
                .bind(row.id)
                .bind(&row.column1)
                .bind(row.column2)
                .bind(row.column3)
                .bind(&row.column4)
                .execute(&pool)
                .await?;
        }

        println!("Successfully inserted 3000 rows into table '{}'.", table_name);
    }

    Ok(())
}
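(The script above is a cargo script; with the shebang it can be made executable and run against the playground's default endpoint, e.g. --database-url mysql://root@127.0.0.1:4000/test. The exact invocation is illustrative.)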
  3. Check logs:
[2024/09/12 15:37:48.265 +08:00] [INFO] [refresher.go:126] ["Auto analyze triggered"] [category=stats] [job="NonPartitionedTableAnalysisJob:\n\tAnalyzeType: analyzeTable\n\tIndexes: \n\tSchema: test\n\tTable: t9\n\tTableID: 122\n\tTableStatsVer: 2\n\tChangePercentage: 1.000000\n\tTableSize: 15000.00\n\tLastAnalysisDuration: 30m0s\n\tWeight: 1.376307\n"]

As you can see, the table size is 5 columns × 3000 rows = 15000.00.

Signed-off-by: Rustin170506 <29879298+Rustin170506@users.noreply.github.com>
@ti-chi-bot bot added the size/M label (Denotes a PR that changes 30-99 lines, ignoring generated files) and removed size/XS (Denotes a PR that changes 0-9 lines, ignoring generated files) on Sep 12, 2024
@Rustin170506
Member Author

For partitioned tables:

  1. Start the TiDB cluster: tiup playground v8.2.0 --db.binpath /Users/rustin/code/tidb/bin/tidb-server
  2. Create some tables and insert data:
#!/usr/bin/env -S cargo +nightly -Zscript
---cargo
[dependencies]
clap = { version = "4.2", features = ["derive"] }
sqlx = { version = "0.7", features = ["runtime-tokio-rustls", "mysql"] }
tokio = { version = "1", features = ["full"] }
fake = { version = "2.5", features = ["derive"] }
---

use clap::Parser;
use fake::{Fake, Faker};
use sqlx::mysql::MySqlPoolOptions;

#[derive(Parser, Debug)]
#[clap(version)]
struct Args {
    #[clap(short, long, help = "MySQL connection string")]
    database_url: String,
}

#[derive(Debug)]
struct TableRow {
    id: i64,
    partition_key: u32,
    column1: String,
    column2: i32,
    column3: i32,
    column4: String,
}

#[tokio::main]
async fn main() -> Result<(), sqlx::Error> {
    let args = Args::parse();

    let pool = MySqlPoolOptions::new()
        .max_connections(5)
        .connect(&args.database_url)
        .await?;

    // Create partitioned table if not exists
    sqlx::query(
        "CREATE TABLE IF NOT EXISTS t (
            id BIGINT NOT NULL,
            partition_key INT NOT NULL,
            column1 VARCHAR(255) NOT NULL,
            column2 INT NOT NULL,
            column3 INT NOT NULL,
            column4 VARCHAR(255) NOT NULL,
            PRIMARY KEY (id, partition_key),
            index idx_column1 (column1)
        ) PARTITION BY RANGE (partition_key) (
            PARTITION p0 VALUES LESS THAN (3000),
            PARTITION p1 VALUES LESS THAN (6000),
            PARTITION p2 VALUES LESS THAN (9000),
            PARTITION p3 VALUES LESS THAN (12000),
            PARTITION p4 VALUES LESS THAN (15000),
            PARTITION p5 VALUES LESS THAN (18000),
            PARTITION p6 VALUES LESS THAN (21000),
            PARTITION p7 VALUES LESS THAN (24000),
            PARTITION p8 VALUES LESS THAN (27000),
            PARTITION p9 VALUES LESS THAN (30000),
            PARTITION p10 VALUES LESS THAN (33000),
            PARTITION p11 VALUES LESS THAN (36000),
            PARTITION p12 VALUES LESS THAN (39000),
            PARTITION p13 VALUES LESS THAN (42000),
            PARTITION p14 VALUES LESS THAN (45000),
            PARTITION p15 VALUES LESS THAN (48000),
            PARTITION p16 VALUES LESS THAN (51000),
            PARTITION p17 VALUES LESS THAN (54000),
            PARTITION p18 VALUES LESS THAN (57000),
            PARTITION p19 VALUES LESS THAN (60000),
            PARTITION p20 VALUES LESS THAN (63000)
        )"
    )
    .execute(&pool)
    .await?;

    // Insert 3000 rows into each of the 20 partitions
    for partition in 1..=20 {
        let partition_key = partition * 3000 + 1; // One distinct key per partition; rows within a partition share it

        for _ in 0..3000 {
            let row = TableRow {
                id: Faker.fake::<i64>(), // Generate a unique id
                partition_key, // Use the current partition key
                column1: Faker.fake::<String>(),
                column2: Faker.fake::<i32>(),
                column3: Faker.fake::<i32>(),
                column4: Faker.fake::<String>(),
            };

            sqlx::query(
                "INSERT INTO t (id, partition_key, column1, column2, column3, column4)
                VALUES (?, ?, ?, ?, ?, ?)"
            )
            .bind(row.id)
            .bind(row.partition_key)
            .bind(&row.column1)
            .bind(row.column2)
            .bind(row.column3)
            .bind(&row.column4)
            .execute(&pool)
            .await?;
        }

        println!("Successfully inserted 3000 rows into partition {} of the 't' table.", partition);
    }

    Ok(())
}
  3. Check logs:
[2024/09/12 17:36:31.192 +08:00] [INFO] [refresher.go:127] ["Auto analyze triggered"] [category=stats] [job="DynamicPartitionedTableAnalysisJob:\n\tAnalyzeType: analyzeDynamicPartitionIndex\n\tPartitions: p2, p3, p6, p7, p9, p17, p19, p20, p4, p13, p16, p5, p11, p12, p15, p18, p1, p8, p10, p14\n\tPartitionIndexes: map[PRIMARY:[p3 p6 p7 p9 p17 p19 p20 p2 p13 p16 p4 p11 p12 p15 p18 p5 p8 p10 p14 p1] idx_column1:[p13 p16 p4 p11 p12 p15 p18 p5 p8 p10 p14 p1 p3 p6 p7 p9 p17 p19 p20 p2]]\n\tSchema: test\n\tGlobal Table: t\n\tGlobal TableID: 104\n\tTableStatsVer: 2\n\tChangePercentage: 1.000000\n\tTableSize: 18000.00\n\tLastAnalysisDuration: 30m0s\n\tWeight: 3.368389\n"]
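Here the table size is 6 × 3000 = 18000.00: the partitioned table has six columns (id, partition_key, and column1 through column4), and each partition holds 3000 rows.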

Member Author

@Rustin170506 Rustin170506 left a comment


🔢 Self-check (PR reviewed by myself and ready for feedback.)

@Rustin170506
Member Author

/retest

1 similar comment
@Rustin170506
Member Author

/retest

Signed-off-by: Rustin170506 <29879298+Rustin170506@users.noreply.github.com>
@Rustin170506
Member Author

Tested again:

  1. Start the TiDB cluster: tiup playground v8.2.0 --db.binpath /Users/rustin/code/tidb/bin/tidb-server
  2. Create some tables and insert data:
#!/usr/bin/env -S cargo +nightly -Zscript
---cargo
[dependencies]
clap = { version = "4.2", features = ["derive"] }
sqlx = { version = "0.7", features = ["runtime-tokio-rustls", "mysql"] }
tokio = { version = "1", features = ["full"] }
fake = { version = "2.5", features = ["derive"] }
---

use clap::Parser;
use fake::{Fake, Faker};
use sqlx::mysql::MySqlPoolOptions;

#[derive(Parser, Debug)]
#[clap(version)]
struct Args {
    #[clap(short, long, help = "MySQL connection string")]
    database_url: String,
}

#[derive(Debug)]
struct TableRow {
    id: i64,
    partition_key: u32,
    column1: String,
    column2: i32,
    column3: i32,
    column4: String,
}

#[tokio::main]
async fn main() -> Result<(), sqlx::Error> {
    let args = Args::parse();

    let pool = MySqlPoolOptions::new()
        .max_connections(5)
        .connect(&args.database_url)
        .await?;

    // Create partitioned table if not exists
    sqlx::query(
        "CREATE TABLE IF NOT EXISTS t (
            id BIGINT NOT NULL,
            partition_key INT NOT NULL,
            column1 VARCHAR(255) NOT NULL,
            column2 INT NOT NULL,
            column3 INT NOT NULL,
            column4 VARCHAR(255) NOT NULL,
            PRIMARY KEY (id, partition_key),
            index idx_column1 (column1)
        ) PARTITION BY RANGE (partition_key) (
            PARTITION p0 VALUES LESS THAN (3000),
            PARTITION p1 VALUES LESS THAN (6000),
            PARTITION p2 VALUES LESS THAN (9000),
            PARTITION p3 VALUES LESS THAN (12000),
            PARTITION p4 VALUES LESS THAN (15000),
            PARTITION p5 VALUES LESS THAN (18000),
            PARTITION p6 VALUES LESS THAN (21000),
            PARTITION p7 VALUES LESS THAN (24000),
            PARTITION p8 VALUES LESS THAN (27000),
            PARTITION p9 VALUES LESS THAN (30000),
            PARTITION p10 VALUES LESS THAN (33000),
            PARTITION p11 VALUES LESS THAN (36000),
            PARTITION p12 VALUES LESS THAN (39000),
            PARTITION p13 VALUES LESS THAN (42000),
            PARTITION p14 VALUES LESS THAN (45000),
            PARTITION p15 VALUES LESS THAN (48000),
            PARTITION p16 VALUES LESS THAN (51000),
            PARTITION p17 VALUES LESS THAN (54000),
            PARTITION p18 VALUES LESS THAN (57000),
            PARTITION p19 VALUES LESS THAN (60000),
            PARTITION p20 VALUES LESS THAN (63000)
        )"
    )
    .execute(&pool)
    .await?;

    // Insert 3000 rows into each of the 20 partitions
    for partition in 1..=20 {
        let partition_key = partition * 3000 + 1; // One distinct key per partition; rows within a partition share it

        for _ in 0..3000 {
            let row = TableRow {
                id: Faker.fake::<i64>(), // Generate a unique id
                partition_key, // Use the current partition key
                column1: Faker.fake::<String>(),
                column2: Faker.fake::<i32>(),
                column3: Faker.fake::<i32>(),
                column4: Faker.fake::<String>(),
            };

            sqlx::query(
                "INSERT INTO t (id, partition_key, column1, column2, column3, column4)
                VALUES (?, ?, ?, ?, ?, ?)"
            )
            .bind(row.id)
            .bind(row.partition_key)
            .bind(&row.column1)
            .bind(row.column2)
            .bind(row.column3)
            .bind(&row.column4)
            .execute(&pool)
            .await?;
        }

        println!("Successfully inserted 3000 rows into partition {} of the 't' table.", partition);
    }

    Ok(())
}
  3. Check logs:
[2024/09/14 11:11:03.619 +08:00] [INFO] [worker.go:91] ["Job submitted"] [category=stats] [job="NonPartitionedTableAnalysisJob:\n\tAnalyzeType: analyzeTable\n\tIndexes: \n\tSchema: test\n\tTable: t10\n\tTableID: 124\n\tTableStatsVer: 2\n\tChangePercentage: 1.000000\n\tTableSize: 15000.00\n\tLastAnalysisDuration: 30m0s\n\tWeight: 1.376307\n"]
  4. Alter a table to add a new column:
#!/usr/bin/env -S cargo +nightly -Zscript
---cargo
[dependencies]
clap = { version = "4.2", features = ["derive"] }
sqlx = { version = "0.7", features = ["runtime-tokio-rustls", "mysql"] }
tokio = { version = "1", features = ["full"] }
fake = { version = "2.5", features = ["derive"] }
---

use clap::Parser;
use fake::{Fake, Faker};
use sqlx::mysql::MySqlPoolOptions;

#[derive(Parser, Debug)]
#[clap(version)]
struct Args {
    #[clap(short, long, help = "MySQL connection string")]
    database_url: String,
}

#[derive(Debug)]
struct TableRow {
    id: i64,
    column1: String,
    column2: i32,
    column3: i32,
    column4: String,
}

#[tokio::main]
async fn main() -> Result<(), sqlx::Error> {
    let args = Args::parse();

    let pool = MySqlPoolOptions::new()
        .max_connections(5)
        .connect(&args.database_url)
        .await?;

    // Update the first table and insert new data
    update_first_table(&pool).await?;

    Ok(())
}

async fn update_first_table(pool: &sqlx::MySqlPool) -> Result<(), sqlx::Error> {
    let table_name = "t0";

    // Add a new column to the first table
    let alter_table_query = format!(
        "ALTER TABLE {} ADD COLUMN new_column VARCHAR(255)",
        table_name
    );
    sqlx::query(&alter_table_query).execute(pool).await?;
    println!("Added new_column to table {}", table_name);

    // Insert 5000 rows with the new column
    for _ in 0..5000 {
        let row = TableRow {
            id: Faker.fake::<i64>(),
            column1: Faker.fake::<String>(),
            column2: Faker.fake::<i32>(),
            column3: Faker.fake::<i32>(),
            column4: Faker.fake::<String>(),
        };
        let new_column_value: String = Faker.fake();

        let insert_query = format!(
            "INSERT INTO {} (id, column1, column2, column3, column4, new_column)
            VALUES (?, ?, ?, ?, ?, ?)",
            table_name
        );

        sqlx::query(&insert_query)
            .bind(row.id)
            .bind(&row.column1)
            .bind(row.column2)
            .bind(row.column3)
            .bind(&row.column4)
            .bind(&new_column_value)
            .execute(pool)
            .await?;
    }

    println!("Successfully inserted 5000 rows with new_column into table '{}'.", table_name);

    Ok(())
}
  5. Check logs:
[2024/09/14 11:13:15.616 +08:00] [INFO] [worker.go:91] ["Job submitted"] [category=stats] [job="NonPartitionedTableAnalysisJob:\n\tAnalyzeType: analyzeTable\n\tIndexes: \n\tSchema: test\n\tTable: t0\n\tTableID: 104\n\tTableStatsVer: 2\n\tChangePercentage: 0.625000\n\tTableSize: 48000.00\n\tLastAnalysisDuration: 2m38.997s\n\tWeight: 1.053691\n"]
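The table size is now 6 × 8000 = 48000.00: after the ALTER TABLE, t0 has six columns and 3000 + 5000 = 8000 rows, so the column count taken from the statistics correctly reflects the newly added column (and ChangePercentage is 5000/8000 = 0.625).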

Contributor

@elsa0520 elsa0520 left a comment


LGTM

@ti-chi-bot bot added labels on Sep 14, 2024: approved, needs-1-more-lgtm (Indicates a PR needs 1 more LGTM)
@Rustin170506
Member Author

/retest

Signed-off-by: Rustin170506 <29879298+Rustin170506@users.noreply.github.com>

tiprow bot commented Sep 14, 2024

@Rustin170506: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name         Commit   Details  Required  Rerun command
fast_test_tiprow  88b71bb  link     true      /test fast_test_tiprow

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.


ti-chi-bot bot commented Sep 14, 2024

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: elsa0520, winoros

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@ti-chi-bot bot added the lgtm label and removed needs-1-more-lgtm (Indicates a PR needs 1 more LGTM) on Sep 14, 2024

ti-chi-bot bot commented Sep 14, 2024

[LGTM Timeline notifier]

Timeline:

  • 2024-09-14 04:38:23.860319761 +0000 UTC m=+676773.600743701: ☑️ agreed by elsa0520.
  • 2024-09-14 07:53:58.596237658 +0000 UTC m=+688508.336661587: ☑️ agreed by winoros.

@ti-chi-bot ti-chi-bot bot merged commit 3688a2b into pingcap:master Sep 14, 2024
23 of 24 checks passed