Writing a CSV with column names, and reading a CSV file generated from a Spark SQL DataFrame in PySpark

Answer #1 100 %

Try:

df.coalesce(1).write.format('com.databricks.spark.csv').save('path+my.csv', header='true')

Note that this may not be an issue on your current setup, but with extremely large datasets you can run into memory problems, since coalesce(1) forces the entire dataset onto a single executor. It will also take longer in a cluster, because everything has to be pulled back to a single location.
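The com.databricks.spark.csv format is the external spark-csv package from the Spark 1.x era; on Spark 2.0+ the CSV writer is built in, so a roughly equivalent version needs no extra package (a minimal sketch; the output path is a placeholder):

df.coalesce(1) \
  .write \
  .option('header', 'true') \
  .csv('path/my_csv_dir')  # Spark writes a directory containing a single part-* file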

Answer #2 100 %

In case it is useful: on Spark 2.1 you can create a single CSV file with the following lines.

dataframe.coalesce(1)  // so only a single part-* file is created
  .write.mode(SaveMode.Overwrite)
  .option("mapreduce.fileoutputcommitter.marksuccessfuljobs", "false")  // avoid creating the _SUCCESS marker file
  .option("header", "true")  // write the header row
  .csv("csvFullPath")
Answer #3 100 %

With Spark >= 2.0, you can do something like:

df = spark.read.csv('path+filename.csv', sep=',', header=True)  # set sep to your delimiter if it is not a comma
df.write.csv('path_filename_of_csv', header=True)               # still written as a directory of partitioned files
df.toPandas().to_csv('path_filename_of_csv', index=False)       # single CSV, pandas style (collects everything to the driver)
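If you want the coalesced output under a specific file name rather than a part-* file inside a directory, one option is to rename it after the write (a sketch assuming the output directory landed on the local filesystem; 'path_filename_of_csv' and 'final.csv' are illustrative names, and for HDFS or S3 you would use the corresponding filesystem API instead):

import glob
import shutil

# coalesce(1) guarantees exactly one part-*.csv inside the output directory
df.coalesce(1).write.csv('path_filename_of_csv', header=True, mode='overwrite')
part_file = glob.glob('path_filename_of_csv/part-*.csv')[0]
shutil.move(part_file, 'final.csv')  # move the single part file to a stable name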
Answer #4 91.6 %

The following should do the trick:

df \
  .write \
  .mode('overwrite') \
  .option('header', 'true') \
  .csv('output.csv')  # note: 'output.csv' is a directory of part files, not a single file

Alternatively, if you want the results to be in a single partition, you can use coalesce(1):

df \
  .coalesce(1) \
  .write \
  .mode('overwrite') \
  .option('header', 'true') \
  .csv('output.csv')

Note, however, that coalesce(1) is an expensive operation and might not be feasible for extremely large datasets.
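Putting both halves of the question together, a minimal end-to-end round trip (a sketch; the path and column names are illustrative) could look like:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName('csv-roundtrip').getOrCreate()

# A small DataFrame with named columns.
df = spark.createDataFrame([(1, 'a'), (2, 'b')], ['id', 'label'])

# Write with the column names as a header row (output is a directory of part files).
df.coalesce(1).write.mode('overwrite').option('header', 'true').csv('roundtrip_csv')

# Read it back, using the header row to restore the column names.
df2 = spark.read.csv('roundtrip_csv', header=True, inferSchema=True)
df2.show()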
