
PySpark in 7 Days (bonus2)

PySpark Cheat Sheet – Data Engineer Edition

TL;DR
• SparkSession is the entry point
• Transformations are lazy, actions trigger execution
• Avoid shuffles, prefer broadcast joins
• Use Parquet/Delta, not CSV
• Window functions = “Top N per group”
• repartition increases, coalesce reduces partitions
• Always think: DAG → stages → tasks



1. Spark Session

from pyspark.sql import SparkSession

spark = SparkSession.builder \
    .appName("AppName") \
    .getOrCreate()

2. Reading Data

Format   | Example
CSV      | spark.read.option("header", "true").csv(path)
JSON     | spark.read.json(path)
Parquet  | spark.read.parquet(path)
Delta    | spark.read.format("delta").load(path)

3. Writing Data

df.write.mode("overwrite").parquet(path)
df.write.partitionBy("date").parquet(path)

4. DataFrame Basics

df.show()
df.printSchema()
df.count()
df.columns

5. Select / Filter / withColumn

from pyspark.sql.functions import col

df.select("id", "name")
df.filter(col("salary") > 50000)
df.withColumn("bonus", col("salary") * 0.1)

6. Null Handling

from pyspark.sql.functions import col

df.dropna()
df.fillna(0)
df.filter(col("col").isNotNull())

7. Joins

Type   | Syntax
Inner  | df1.join(df2, "id", "inner")
Left   | df1.join(df2, "id", "left")
Right  | df1.join(df2, "id", "right")
Full   | df1.join(df2, "id", "outer")

Broadcast Join

from pyspark.sql.functions import broadcast

df.join(broadcast(dim_df), "id")

8. Aggregations

from pyspark.sql.functions import avg, count, max

df.groupBy("dept").agg(
    count("*"),
    avg("salary"),
    max("salary")
)

9. Window Functions

from pyspark.sql.window import Window
from pyspark.sql.functions import col, row_number, rank, dense_rank

w = Window.partitionBy("dept").orderBy(col("salary").desc())
df.withColumn("rn", row_number().over(w))

10. Spark SQL

df.createOrReplaceTempView("employees")

spark.sql("""
    SELECT dept, AVG(salary)
    FROM employees
    GROUP BY dept
""")

11. Partitions & Performance

Operation       | Use Case
repartition(n)  | Increase / rebalance partitions (shuffle)
coalesce(n)     | Reduce partitions (no shuffle)

df.repartition(10)
df.coalesce(1)

12. Cache & Persist

from pyspark import StorageLevel

df.cache()
df.persist(StorageLevel.MEMORY_AND_DISK)

13. Execution Plan & Debugging

df.explain()        # physical plan only
df.explain(True)    # parsed, analyzed, optimized, and physical plans

14. Delta Lake (Must-Know)

MERGE INTO target t
USING source s
ON t.id = s.id
WHEN MATCHED THEN UPDATE SET *
WHEN NOT MATCHED THEN INSERT *

15. Structured Streaming (Basics)

df.writeStream \
    .format("delta") \
    .option("checkpointLocation", "/chk") \
    .start("/out")

16. Common Interview Triggers

  • Slow job → shuffle or skew
  • Top N per group → window functions
  • Small lookup table → broadcast join
  • Repeated use → cache
  • Analytics storage → Parquet / Delta
