```scala
import org.apache.spark.sql.functions.{avg, count}

val result = df
  .groupBy($"department")
  .agg(count("*").as("emp_cnt"), avg($"salary").as("avg_salary"))
  .filter($"emp_cnt" > 5)
```
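As a quick sanity check, the group–count–filter logic above can be mirrored in plain Python over a few hypothetical `(department, salary)` rows — a sketch of the semantics, not Spark itself:

```python
from collections import defaultdict

# Hypothetical sample rows standing in for the df above.
rows = [("eng", 100.0), ("eng", 120.0), ("eng", 110.0),
        ("eng", 130.0), ("eng", 90.0), ("eng", 150.0),
        ("hr", 80.0), ("hr", 85.0)]

# groupBy("department"): collect salaries per department.
salaries = defaultdict(list)
for dept, salary in rows:
    salaries[dept].append(salary)

# agg(count, avg) + filter(emp_cnt > 5): keep only large departments.
result = {dept: (len(s), sum(s) / len(s))
          for dept, s in salaries.items() if len(s) > 5}
```

Here only `eng` survives the filter (6 employees), while `hr` (2 employees) is dropped — the same rows a Spark executor would keep.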
---
```python
from pyspark import SparkContext

sc = SparkContext(appName="WordCount")
lines = sc.textFile("hdfs:///data/myfile.txt")
```
Add a short paragraph for each stage, explaining why you chose that API.
```python
print(f"Unique words: {unique_word_count}")
```
| Tip | How to Apply |
|-----|--------------|
| **Show Spark's lazy evaluation** | Mention that transformations build a DAG; actions trigger execution. |
| **Explain the physical plan** | Use `df.explain()` in a note to demonstrate understanding of shuffle, broadcast, etc. |
| **State assumptions** | "Assume the input file fits in HDFS and each line is a UTF-8 string." |
| **Edge-case handling** | Talk about empty files, null values, or malformed CSV rows. |
| **Performance hints** | Suggest `repartition` before a heavy shuffle or using `broadcast` for small lookup tables. |
| **Testing** | Show a tiny local test (e.g., `sc.parallelize(["a b","b c"]).flatMap(...).collect()`). |
| **Clean code** | Use meaningful variable names, consistent indentation, and short comments. |
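Following the "Testing" tip, the splitting and cleaning lambdas can be exercised even without a running cluster. The sketch below mirrors `sc.parallelize(["a b","b c"]).flatMap(...).collect()` with plain Python lists (a `flatMap` over `split()` is just a flattened comprehension), so the logic is checkable in any interpreter:

```python
# Stand-ins for the RDD lambdas used in the word-count pipeline.
split_line = lambda line: line.split()
clean_word = lambda w: w.lower().strip('.,!?"\'')

data = ["a b", "b c"]                                   # tiny in-memory "RDD"
words = [w for line in data for w in split_line(line)]  # flatMap
cleaned = [clean_word(w) for w in words]                # map
print(cleaned)  # prints ['a', 'b', 'b', 'c']
```

Because the lambdas are defined separately, the exact same functions can later be passed to the real `flatMap`/`map` calls on the cluster.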
# 2️⃣ Split lines into words and clean them

```python
words = lines.flatMap(lambda line: line.split()) \
             .map(lambda w: w.lower().strip('.,!?"\''))
```
```python
words = lines.flatMap(lambda line: line.split())

# optional cleaning
cleaned = words.map(lambda w: w.lower().strip('.,!?"\''))

distinct_words = cleaned.distinct()
count = distinct_words.count()
```