
test: add a test for large record batches #271


Closed
wants to merge 3 commits

Conversation

plaflamme (Contributor) commented:

assertion failed: data.len() <= self.capacity()
stack backtrace:
   0: rust_begin_unwind
             at /rustc/11f32b73e0dc9287e305b5b9980d24aecdc8c17f/library/std/src/panicking.rs:647:5
   1: core::panicking::panic_fmt
             at /rustc/11f32b73e0dc9287e305b5b9980d24aecdc8c17f/library/core/src/panicking.rs:72:14
   2: core::panicking::panic
             at /rustc/11f32b73e0dc9287e305b5b9980d24aecdc8c17f/library/core/src/panicking.rs:144:5
   3: duckdb::vtab::vector::FlatVector::copy
             at .cargo/registry/src/index.crates.io-6f17d22bba15001f/duckdb-0.10.0/src/vtab/vector.rs:86:9
   4: duckdb::vtab::arrow::primitive_array_to_flat_vector
             at .cargo/registry/src/index.crates.io-6f17d22bba15001f/duckdb-0.10.0/src/vtab/arrow.rs:247:5
   5: duckdb::vtab::arrow::primitive_array_to_vector
             at .cargo/registry/src/index.crates.io-6f17d22bba15001f/duckdb-0.10.0/src/vtab/arrow.rs:298:13
   6: duckdb::vtab::arrow::record_batch_to_duckdb_data_chunk
             at .cargo/registry/src/index.crates.io-6f17d22bba15001f/duckdb-0.10.0/src/vtab/arrow.rs:213:17
   7: duckdb::appender::arrow::<impl duckdb::appender::Appender>::append_record_batch
             at .cargo/registry/src/index.crates.io-6f17d22bba15001f/duckdb-0.10.0/src/appender/arrow.rs:39:9
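For context, a minimal reproduction sketch of the failure above, assuming duckdb 0.10 with the `appender-arrow` feature (which re-exports the `arrow` crate as `duckdb::arrow`); the table name and row count are illustrative:

```rust
use std::sync::Arc;

use duckdb::arrow::array::Int32Array;
use duckdb::arrow::datatypes::{DataType, Field, Schema};
use duckdb::arrow::record_batch::RecordBatch;
use duckdb::{Connection, Result};

fn main() -> Result<()> {
    let db = Connection::open_in_memory()?;
    db.execute_batch("CREATE TABLE t (i INTEGER)")?;

    // Build a record batch with more rows than DuckDB's standard vector
    // size (2048), which is what trips the capacity assertion above.
    let schema = Arc::new(Schema::new(vec![Field::new("i", DataType::Int32, false)]));
    let values = Int32Array::from_iter_values(0..10_000);
    let batch = RecordBatch::try_new(schema, vec![Arc::new(values)]).expect("valid batch");

    let mut appender = db.appender("t")?;
    // Before the chunking fix, this call panicked with
    // `assertion failed: data.len() <= self.capacity()`.
    appender.append_record_batch(batch)?;
    Ok(())
}
```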

mlafeldt self-assigned this Jul 7, 2025
mlafeldt added a commit that referenced this pull request Jul 8, 2025
Based on the test from #271 by @plaflamme, this PR fixes
`append_record_batch` so that it automatically chunks the passed record
batch into smaller pieces (using Arrow's zero-copy slicing) that fit
within DuckDB's vector size limit.
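A hedged sketch of that approach (not the actual implementation merged in #530): slice the batch into pieces no larger than DuckDB's standard vector size before handing each piece to the data chunk conversion. The constant and function name here are hypothetical.

```rust
use duckdb::arrow::record_batch::RecordBatch;

/// DuckDB's standard vector size (2048 rows in default builds).
/// Illustrative constant, not an exported duckdb-rs API.
const STANDARD_VECTOR_SIZE: usize = 2048;

/// Split a batch into slices of at most STANDARD_VECTOR_SIZE rows.
/// `RecordBatch::slice` shares the underlying Arrow buffers, so no row
/// data is copied.
fn chunk_record_batch(batch: &RecordBatch) -> Vec<RecordBatch> {
    let mut chunks = Vec::new();
    let mut offset = 0;
    while offset < batch.num_rows() {
        let len = STANDARD_VECTOR_SIZE.min(batch.num_rows() - offset);
        chunks.push(batch.slice(offset, len));
        offset += len;
    }
    chunks
}
```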
mlafeldt (Member) commented Jul 8, 2025

@plaflamme Thanks for the test! I merged it as part of #530, which implements proper chunking.

mlafeldt closed this Jul 8, 2025