AI-native data storage is CarbonData's new scope. In AI projects, data scientists and engineers spend about 80% of their time on data preparation, and traditional storage presents numerous bottlenecks in this process:
- Data silos: Training data may be scattered across data lakes, data warehouses, file systems, object storage, and other locations, making integration difficult.
- Performance bottlenecks: the training phase requires high-speed, low-latency data throughput to feed GPUs and avoid expensive GPU resources sitting idle, while the inference phase requires high-concurrency, low-latency vector similarity search capabilities.
- Complex data formats: AI processes data types far beyond tables, including unstructured data (images, videos, text, audio) and semi-structured data (JSON, XML). Traditional databases have limited capabilities for processing and querying such data.
- Lack of metadata management: The lack of effective management of rich metadata such as data versions, lineage, annotation information, and experimental parameters leads to poor experimental reproducibility.
- Vectorization requirements: Modern AI models (such as large language models) convert all data into vector embeddings. Traditional storage cannot efficiently store and retrieve high-dimensional vectors.
In previous releases, Apache CarbonData has been an indexed columnar data store solution for fast analytics on big data platforms such as Apache Hadoop and Apache Spark. You can find the latest CarbonData documentation and learn more at: https://carbondata.apache.org
The CarbonData file format is a columnar store in HDFS. It has many features that a modern columnar format has, such as splittability, compression schemes, and complex data types, and it adds the following unique features:
- Stores data along with index: the index can significantly accelerate query performance and reduce I/O scans and CPU usage when the query contains filters. The CarbonData index consists of multiple levels of indices; a processing framework can leverage it to reduce the number of tasks it needs to schedule and process, and it can also skip-scan at a finer-grained unit (called a blocklet) during task-side scanning instead of scanning the whole file (see the sketch after this list).
- Operable encoded data: by supporting efficient compression and global encoding schemes, CarbonData can query compressed/encoded data directly and convert it only just before returning results to the user, i.e. "late materialization".
- Supports various use cases with one single data format: interactive OLAP-style queries, sequential access (big scans), and random access (narrow scans).
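The index-assisted filtering described above can be exercised through Spark SQL. Below is a minimal sketch, assuming the CarbonData-Spark integration jar is on the classpath and that the `org.apache.spark.sql.CarbonExtensions` session extension is available; the table name, columns, and values are illustrative only.

```scala
import org.apache.spark.sql.SparkSession

// Minimal sketch: a local Spark session with CarbonData's SQL extensions enabled.
// Assumes the carbondata-spark jar is on the classpath; names and values are illustrative.
val spark = SparkSession.builder()
  .appName("carbondata-index-sketch")
  .master("local[*]")
  .config("spark.sql.extensions", "org.apache.spark.sql.CarbonExtensions")
  .getOrCreate()

// Create a CarbonData table; data is stored column-wise along with multi-level indexes.
spark.sql(
  """CREATE TABLE IF NOT EXISTS sales (
    |  id INT,
    |  city STRING,
    |  amount DOUBLE
    |) STORED AS carbondata""".stripMargin)

spark.sql("INSERT INTO sales VALUES (1, 'shenzhen', 100.0), (2, 'bangalore', 250.0)")

// A filter query like this can prune blocklets via the built-in index,
// scanning only the data that can match the predicate.
spark.sql("SELECT city, SUM(amount) FROM sales WHERE city = 'shenzhen' GROUP BY city").show()
```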
CarbonData is built using Apache Maven; to build CarbonData from source, follow the build instructions in the project repository.
- What is CarbonData
- Quick Start
- Use Cases
- Language Reference
- CarbonData Data Definition Language
- CarbonData Data Manipulation Language
- CarbonData Streaming Ingestion
- Configuring CarbonData
- Index Developer Guide
- Data Types
- CarbonData Index Management
- CarbonData BloomFilter Index
- CarbonData Lucene Index
- CarbonData MV
- CarbonData Secondary Index
- Heterogeneous Format Segments in CarbonData
- SDK Guide
- C++ SDK Guide
- Performance Tuning
- S3 Storage
- Distributed Index Server
- CDC and SCD
- Carbon as Spark's Datasource
- FAQs
Some features are marked as experimental because the syntax/implementation might change in the future.
- Hybrid format table using Add Segment (a hedged sketch follows this list).
- Accelerating performance using MV on parquet/orc.
- Merge API for Spark DataFrame.
- Hive write for non-transactional table.
- Secondary Index as a Coarse Grain Index in query processing.
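As an illustration of the hybrid format table feature above, an existing Parquet (or ORC) folder can be registered as a segment of a CarbonData table. The following is a hedged sketch that reuses the `spark` session from the earlier example; the table name, path, and exact option names are assumptions and may differ between releases.

```scala
// Hedged sketch of the experimental Add Segment syntax; 'sales' and the path are
// illustrative, and the option names may vary between releases.
spark.sql(
  """ALTER TABLE sales ADD SEGMENT
    |OPTIONS ('path' = '/data/legacy/sales_parquet', 'format' = 'parquet')""".stripMargin)
```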
This is an active open source project for everyone, and we are always open to people who want to use this system or contribute to it. This guide explains how to contribute to CarbonData.
To get involved in CarbonData:
- First, subscribe by emailing dev-subscribe@carbondata.apache.org; you can then discuss issues by emailing dev@carbondata.apache.org, or browse the Apache CarbonData Dev Mailing List archive.
- Report issues on GitHub Issues.
- You can also get in touch with the community on Slack. Once we invite you, you can use the Slack link to sign in to the CarbonData workspace.
Apache CarbonData is an open source project of The Apache Software Foundation (ASF).