pyspark.sql.streaming.DataStreamReader.parquet#
- DataStreamReader.parquet(path, mergeSchema=None, pathGlobFilter=None, recursiveFileLookup=None, datetimeRebaseMode=None, int96RebaseMode=None)[source]#
Loads a Parquet file stream, returning the result as a DataFrame.
New in version 2.0.0.
Changed in version 3.5.0: Supports Spark Connect.
- Parameters
- path : str
the path in any Hadoop-supported file system
- Other Parameters
- Extra options
For the extra options, refer to Data Source Option for the version you use.
Examples
Load a data stream from a temporary Parquet file.
>>> import tempfile
>>> import time
>>> with tempfile.TemporaryDirectory(prefix="parquet") as d:
...     # Write a temporary Parquet file to read it.
...     spark.range(10).write.mode("overwrite").format("parquet").save(d)
...
...     # Start a streaming query to read the Parquet file.
...     q = spark.readStream.schema(
...         "id LONG").parquet(d).writeStream.format("console").start()
...     time.sleep(3)
...     q.stop()