
Apache Arrow DataFusion with Comet Plugin

In the big data world, Apache Arrow DataFusion has become a powerful engine for handling queries. It is written in Rust and uses memory efficiently, which makes it a favorite among developers for building data-centric systems. Comet takes this a step further: it is a plugin that brings Apache Arrow DataFusion to Apache Spark, changing how Spark SQL queries are executed. This article takes a deep dive into Comet, looking at what it does, the advantages it offers, and how it can make Spark SQL queries faster and more efficient.


Apache Arrow DataFusion

Apache Arrow DataFusion is a very fast, extensible query engine for building high-quality data-centric systems in Rust, using the Apache Arrow in-memory format. It offers SQL and DataFrame APIs, excellent performance, built-in support for CSV, Parquet, JSON, and Avro, extensive customization options, and a great community. DataFusion is part of the Apache Arrow project.


Introduction to Comet Plugin in DataFusion

Comet is an Apache Spark plugin that uses Apache Arrow DataFusion as its native runtime. It is designed to improve query efficiency and runtime. Comet runs Spark SQL queries using the native DataFusion runtime, which is typically faster and more resource-efficient than JVM-based runtimes.


Comet aims to accelerate Spark workloads by optimizing query execution and leveraging hardware accelerators. It is designed as a drop-in replacement for Spark’s JVM-based SQL execution engine and offers significant performance improvements for some workloads.


Comet runs Spark SQL queries using the native DataFusion runtime. This means it can process queries more efficiently, reducing the time it takes to return results. Furthermore, Comet is designed to take advantage of hardware accelerators: hardware components built to perform certain tasks more efficiently than general-purpose CPUs, further improving query performance.


Comet was originally implemented at Apple and the engineers who worked on the project are also significant contributors to Arrow and DataFusion. Bringing Comet into the Apache Software Foundation will accelerate its development and grow its community of contributors and users.


Architecture

The architecture of Apache Arrow DataFusion Comet can be broken down into two main execution paths:

  1. JVM path

  2. Native path


(Figure: Apache Arrow DataFusion Comet architecture)

Both paths read data from data sources such as Hive, Parquet, and Iceberg.


JVM path: This path uses the traditional Spark SQL execution engine.

  1. Data is first scanned from the data source and then filtered or transformed.

  2. Next, Apache Arrow columnar batches are created. These batches are then shuffled across the cluster, and joins are performed.

  3. Finally, the data is returned.


Native path: This path leverages DataFusion, a fast, in-memory query engine.

  1. Data is scanned from the data source using a Parquet native reader and then filtered or transformed.

  2. Similar to the JVM path, Apache Arrow columnar batches are created. These batches are then shuffled across the cluster and joins are performed using SIMD accelerated columnar execution.

  3. Finally, the data is returned in Arrow columnar format.


Here is how Apache Arrow DataFusion Comet works:

  1. Start a Spark job stage. This initiates a series of tasks within Spark to complete a specific job.

  2. Data source: The Spark job reads data from external data sources like Hive, Parquet, or Iceberg.

  3. Execution path selection: Apache Arrow DataFusion Comet decides whether to use the JVM path or the native path for query execution, based on factors such as query complexity and which features DataFusion supports.

  4. Data scan (native path): If the native path is chosen, Comet utilizes a Parquet native reader to scan the data in its columnar format.

  5. Data processing (native path): The data is then filtered or transformed as per the query requirements.

  6. Columnar execution (native path): Apache Arrow columnar batches are created for efficient processing using DataFusion’s SIMD accelerated engine. This leverages vectorized instructions for faster computations.

  7. Shuffle and join (native path): If the query involves joins or aggregations, the data is shuffled across the cluster for efficient processing. Joins are performed using SIMD accelerated columnar execution within DataFusion.

  8. Data exchange (both paths): Data is exchanged between the JVM and native execution spaces using Apache Arrow columnar format for efficient communication.

  9. Fallback (if needed): In case Comet encounters an unsupported expression or feature during the native path execution, it seamlessly falls back to Spark’s traditional JVM-based execution engine to ensure query completion.

  10. Return results: Finally, the processed data is returned to the user, potentially still in columnar format for further analysis.


Comet achieves faster performance by using DataFusion’s SIMD-vectorized execution engine. DataFusion is written in Rust, which works with memory directly and avoids JVM overhead, unlike Spark’s engine, which is written in Java and runs on the JVM. Comet falls back to the JVM path in cases where DataFusion doesn’t support a particular expression.


Benefits of using Comet

Comet, an Apache Spark plugin that uses Apache Arrow DataFusion as its native runtime, offers several benefits:

  • Improved Query Efficiency: Comet optimizes the execution of Spark SQL queries, processing them more efficiently.

  • Optimized Runtime: Comet uses the native DataFusion runtime, which is typically faster and more resource-efficient than JVM-based runtimes.

  • Hardware Acceleration: Comet is designed to take advantage of hardware accelerators, further improving query performance.

  • Feature Parity with Apache Spark: Comet strives to keep feature parity with Apache Spark, ensuring a seamless transition and consistent performance.

  • Automatic Fallback: Comet automatically detects unsupported features and falls back to the Spark engine, ensuring that all queries can be executed.


Limitations of using Comet

There are a few limitations to using Comet:

  1. Java Version: Comet has some build issues on Java 1.8 because the git-commit-id-maven-plugin has dropped Java 1.8 support, even though the README states that only JDK 8 and up is required.

  2. Shuffle Limitations: Comet native shuffle only supports hash partitioning and single partitioning, and doesn’t support complex types yet. Additional configuration is required to enable columnar shuffle, which supports all partitioning schemes and basic complex types (see Step 6 below).


Use Cases and Applications

Apache Arrow DataFusion Comet is designed for scenarios where high-performance data processing is required. Here are some potential use cases:

  • Large-Scale Data Processing: Comet can be used in scenarios where large volumes of data need to be processed quickly and efficiently.

  • Data Analytics: Comet’s improved query efficiency makes it suitable for data analytics tasks, where quick insights from data are required.

  • Machine Learning: The efficient data processing capabilities of Comet can be beneficial in machine learning workflows, where large datasets are often used for training models.

  • Real-Time Processing: Comet’s optimized runtime can be advantageous in real-time data processing scenarios, where low latency is crucial.


How does Comet integrate with Apache Arrow DataFusion?

Below is the step-by-step process for integrating Comet, and with it Apache Arrow DataFusion, into a Spark environment to improve the efficiency and runtime of Spark SQL queries:


STEP 1: Getting Started

Ensure that all the requirements are met and the required software is installed on your machine.
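
As a rough sanity check, you might verify the toolchain before building. The commands below are only a sketch and assume the main prerequisites are a JDK, the Rust toolchain used to compile Comet’s native engine, and a local Apache Spark installation:

# Hypothetical prerequisite check before building Comet
java -version      # JDK 8 or later, as noted in the Comet README
rustc --version    # Rust toolchain, used to build the native DataFusion engine
echo $SPARK_HOME   # should point to the Spark installation Comet will run against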


STEP 2: Clone the Repository
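
At the time of writing, the Comet source code is hosted in the arrow-datafusion-comet repository under the Apache GitHub organization (the directory name in the next step assumes this), so cloning it looks roughly like:

git clone https://github.com/apache/arrow-datafusion-comet.git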


STEP 3: Specify the Spark Version and Build Comet

cd arrow-datafusion-comet 
make release PROFILES="-Pspark-3.4"

STEP 4: Run Spark with Comet Enabled

Ensure that the SPARK_HOME environment variable points to the same Spark version that Comet was built for.

$SPARK_HOME/bin/spark-shell --jars spark/target/comet-spark-spark3.4_2.12-0.1.0-SNAPSHOT.jar \
  --conf spark.sql.extensions=org.apache.comet.CometSparkSessionExtensions \
  --conf spark.comet.enabled=true \
  --conf spark.comet.exec.enabled=true \
  --conf spark.comet.exec.all.enabled=true

STEP 5: Verify Comet is Enabled for Spark SQL Query

Create a test Parquet source using the Scala command:

(0 until 10).toDF("a").write.mode("overwrite").parquet("/tmp/test")

Query the data from the test source and check the INFO message and the query plan to ensure that Comet operators are being used instead of Spark ones:

spark.read.parquet("/tmp/test").createOrReplaceTempView("t1")
spark.sql("select * from t1 where a > 5").explain

STEP 6: Enable Comet Shuffle

The Comet shuffle feature is disabled by default. To enable it, add the related configurations:

--conf spark.shuffle.manager=org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager \
--conf spark.comet.exec.shuffle.enabled=true

The above configurations enable Comet native shuffle which only supports hash partitioning and single partition. Comet native shuffle doesn’t support complex types yet.


To enable columnar shuffle which supports all partitioning and basic complex types, one more configuration is required:

--conf spark.comet.columnar.shuffle.enabled=true

Feature Parity with Apache Spark

Feature parity refers to the idea that two or more products (in this case, Apache Spark and the Comet plugin) offer the same or equivalent features. Comet’s goal is to maintain feature parity with Apache Spark, meaning it aims to support all the features that Apache Spark does.


How Comet Maintains Feature Parity with Apache Spark

Comet strives to ensure users experience the same behavior whether Comet is enabled or disabled in their Spark jobs. This includes the same features, configurations, and query results. If Comet encounters a feature it does not support, it automatically detects this and falls back to the Spark engine. This ensures that all queries can be executed, even if some features are not yet supported by Comet.
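

As a rough, user-level illustration of this guarantee, the same query can be compared with Comet execution toggled off and on. The sketch below is only illustrative: it reuses the t1 view over /tmp/test from the earlier steps and assumes spark.comet.exec.enabled can be toggled at runtime.

// Illustrative parity check: the same query should return the same rows
// whether Comet native execution is enabled or disabled.
spark.conf.set("spark.comet.exec.enabled", "false")
val sparkRows = spark.sql("select * from t1 where a > 5").collect()

spark.conf.set("spark.comet.exec.enabled", "true")
val cometRows = spark.sql("select * from t1 where a > 5").collect()

// Compare as sets, since without an ORDER BY the row order is not guaranteed.
assert(sparkRows.toSet == cometRows.toSet)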


To verify that Comet maintains feature parity with Apache Spark, the Comet project uses unit tests within Comet itself. Additionally, it re-uses Spark SQL tests and ensures that they all pass with the Comet extension enabled.


Benefits of Feature Parity

  • Consistency for the User: Since Comet maintains feature parity with Apache Spark, users can expect the same behavior whether they are using Comet or not.

  • Ease of Integration: This consistency makes it easier for users to integrate Comet into their existing Spark workflows, as they do not have to worry about unsupported features or different behaviors.

  • Automatic Fallback: Comet automatically falls back to the Spark engine for unsupported features, ensuring that all queries can be executed.

  • Seamless User Experience: This automatic fallback provides a seamless user experience.

  • Performance Improvements: Users can benefit from the performance improvements offered by Comet, without losing access to any of the features provided by Apache Spark.


Code Example

Here’s a simple code example to illustrate how you might use Comet in a Spark job:

// Import necessary libraries
import org.apache.spark.sql.SparkSession

// Create a SparkSession with Comet enabled.
// Note: spark.sql.extensions is a static configuration, so it is set on the
// builder here rather than on an already-created session.
val spark = SparkSession.builder()
  .appName("Comet Example")
  .config("spark.sql.extensions", "org.apache.comet.CometSparkSessionExtensions")
  .config("spark.comet.enabled", "true")
  .config("spark.comet.exec.enabled", "true")
  .config("spark.comet.exec.all.enabled", "true")
  .getOrCreate()

// Run a Spark SQL query ("my_table" is a placeholder for an existing table or view)
val df = spark.sql("SELECT * FROM my_table")
df.show()

In this example, we create a SparkSession, the entry point to any Spark functionality, and enable Comet by setting several configurations on the session builder. Then we run a Spark SQL query and display the results. If the query uses a feature that Comet does not support, Comet will automatically fall back to the Spark engine.


This ensures that the query will execute successfully, regardless of whether Comet supports all the features used in the query. This is a simple example and real-world usage may be more complex, depending on the specific use case and environment.


Fallback Mechanism in Comet

The fallback mechanism in Comet is a crucial feature that ensures the smooth execution of Spark jobs.


Here’s how it works:

  • Automatic Detection: Comet is designed to detect any unsupported features when executing a Spark job.

  • Fallback to Spark Engine: If Comet encounters a feature that it does not support, it automatically falls back to the Spark engine. This means that the Spark job will continue to execute that particular operation using the original Spark engine.

  • Seamless Execution: This fallback mechanism ensures that all queries can be executed seamlessly, regardless of whether Comet supports all the features used in the query.

  • Consistent User Experience: Users can expect the same behavior with Comet turned on or off in their Spark jobs.


This mechanism is part of Comet’s effort to maintain feature parity with Apache Spark, providing a consistent and reliable user experience.


Conclusion

The Comet plugin for Apache Arrow DataFusion represents a significant stride forward in the field of data processing. By maintaining feature parity with Apache Spark, providing an automatic fallback mechanism, and offering improved query efficiency and optimized runtime, Comet ensures a seamless and enhanced user experience. As Comet continues to evolve and mature, we can expect it to play an increasingly pivotal role in the landscape of big data processing, paving the way for faster, more efficient data-centric systems.
