Notebook: Learning PySpark - Resilient Distributed Datasets
Learning PySpark notebook (Chapter 03) with comments.
Learning PySpark - Resilient Distributed Datasets
Creating RDDs
There are two ways to create an RDD in PySpark: you can parallelize a collection, such as a list
import pyspark
sc = pyspark.SparkContext()
data = sc.parallelize(
[('Amber', 22), ('Alfred', 23), ('Skye',4), ('Albert', 12),
('Amber', 9)])
.collect() returns all the elements of the RDD to the driver, serialized into a list
data.collect()
[('Amber', 22), ('Alfred', 23), ('Skye', 4), ('Albert', 12), ('Amber', 9)]
Or you can read from a repository (a file or a database)
data_from_file = sc.textFile('./VS14MORT.txt.gz', 4)
# 4 indicates the number of partitions the dataset is divided into
Note that to execute the code above you will have to change the path to where the data is stored. The dataset can be downloaded from http://tomdrabas.com/data/VS14MORT.txt.gz
Schema
RDDs are schema-less data structures: they are not constrained to a fixed set of columns, rows, or data types.
data_heterogenous = sc.parallelize([('Ferrari', 'fast'), {'Porsche': 100000}, ['Spain','visited', 4504]]).collect()
data_heterogenous
[('Ferrari', 'fast'), {'Porsche': 100000}, ['Spain', 'visited', 4504]]
You can access the data in the object as you would normally do in Python.
data_heterogenous[1]['Porsche']
100000
Reading from files
When you read from a text file, each row from the file forms an element of an RDD.
data_from_file.take(1)
[' 1 2101 M1087 432311 4M4 2014U7CN I64 238 070 24 0111I64 01 I64 01 11 100 601']
User defined functions
You can define a longer method to transform your data instead of using a lambda expression.
def extractInformation(row):
    import re
    import numpy as np
    selected_indices = [
        2,4,5,6,7,9,10,11,12,13,14,15,16,17,18,
        19,21,22,23,24,25,27,28,29,30,32,33,34,
        36,37,38,39,40,41,42,43,44,45,46,47,48,
        49,50,51,52,53,54,55,56,58,60,61,62,63,
        64,65,66,67,68,69,70,71,72,73,74,75,76,
        77,78,79,81,82,83,84,85,87,89
    ]
    '''
Input record schema
schema: n-m (o) -- xxx
n - position from
m - position to
o - number of characters
xxx - description
1. 1-19 (19) -- reserved positions
2. 20 (1) -- resident status
3. 21-60 (40) -- reserved positions
4. 61-62 (2) -- education code (1989 revision)
5. 63 (1) -- education code (2003 revision)
6. 64 (1) -- education reporting flag
7. 65-66 (2) -- month of death
8. 67-68 (2) -- reserved positions
9. 69 (1) -- sex
10. 70 (1) -- age: 1-years, 2-months, 4-days, 5-hours, 6-minutes, 9-not stated
11. 71-73 (3) -- number of units (years, months etc)
12. 74 (1) -- age substitution flag (if the age reported in positions 70-74 is calculated using dates of birth and death)
13. 75-76 (2) -- age recoded into 52 categories
14. 77-78 (2) -- age recoded into 27 categories
15. 79-80 (2) -- age recoded into 12 categories
16. 81-82 (2) -- infant age recoded into 22 categories
17. 83 (1) -- place of death
18. 84 (1) -- marital status
19. 85 (1) -- day of the week of death
20. 86-101 (16) -- reserved positions
21. 102-105 (4) -- current year
22. 106 (1) -- injury at work
23. 107 (1) -- manner of death
24. 108 (1) -- manner of disposition
25. 109 (1) -- autopsy
26. 110-143 (34) -- reserved positions
27. 144 (1) -- activity code
28. 145 (1) -- place of injury
29. 146-149 (4) -- ICD code
30. 150-152 (3) -- 358 cause recode
31. 153 (1) -- reserved position
32. 154-156 (3) -- 113 cause recode
33. 157-159 (3) -- 130 infant cause recode
34. 160-161 (2) -- 39 cause recode
35. 162 (1) -- reserved position
36. 163-164 (2) -- number of entity-axis conditions
37-56. 165-304 (140) -- list of up to 20 conditions
57. 305-340 (36) -- reserved positions
58. 341-342 (2) -- number of record axis conditions
59. 343 (1) -- reserved position
60-79. 344-443 (100) -- record axis conditions
80. 444 (1) -- reserved position
81. 445-446 (2) -- race
82. 447 (1) -- bridged race flag
83. 448 (1) -- race imputation flag
84. 449 (1) -- race recode (3 categories)
85. 450 (1) -- race recode (5 categories)
86. 451-483 (33) -- reserved positions
87. 484-486 (3) -- Hispanic origin
88. 487 (1) -- reserved
89. 488 (1) -- Hispanic origin/race recode
    '''
    record_split = re.compile(
r'([\s]{19})([0-9]{1})([\s]{40})([0-9\s]{2})([0-9\s]{1})([0-9]{1})([0-9]{2})' +
r'([\s]{2})([FM]{1})([0-9]{1})([0-9]{3})([0-9\s]{1})([0-9]{2})([0-9]{2})' +
r'([0-9]{2})([0-9\s]{2})([0-9]{1})([SMWDU]{1})([0-9]{1})([\s]{16})([0-9]{4})' +
r'([YNU]{1})([0-9\s]{1})([BCOU]{1})([YNU]{1})([\s]{34})([0-9\s]{1})([0-9\s]{1})' +
r'([A-Z0-9\s]{4})([0-9]{3})([\s]{1})([0-9\s]{3})([0-9\s]{3})([0-9\s]{2})([\s]{1})' +
r'([0-9\s]{2})([A-Z0-9\s]{7})([A-Z0-9\s]{7})([A-Z0-9\s]{7})([A-Z0-9\s]{7})' +
r'([A-Z0-9\s]{7})([A-Z0-9\s]{7})([A-Z0-9\s]{7})([A-Z0-9\s]{7})([A-Z0-9\s]{7})' +
r'([A-Z0-9\s]{7})([A-Z0-9\s]{7})([A-Z0-9\s]{7})([A-Z0-9\s]{7})([A-Z0-9\s]{7})' +
r'([A-Z0-9\s]{7})([A-Z0-9\s]{7})([A-Z0-9\s]{7})([A-Z0-9\s]{7})([A-Z0-9\s]{7})' +
r'([A-Z0-9\s]{7})([\s]{36})([A-Z0-9\s]{2})([\s]{1})([A-Z0-9\s]{5})([A-Z0-9\s]{5})' +
r'([A-Z0-9\s]{5})([A-Z0-9\s]{5})([A-Z0-9\s]{5})([A-Z0-9\s]{5})([A-Z0-9\s]{5})' +
r'([A-Z0-9\s]{5})([A-Z0-9\s]{5})([A-Z0-9\s]{5})([A-Z0-9\s]{5})([A-Z0-9\s]{5})' +
r'([A-Z0-9\s]{5})([A-Z0-9\s]{5})([A-Z0-9\s]{5})([A-Z0-9\s]{5})([A-Z0-9\s]{5})' +
r'([A-Z0-9\s]{5})([A-Z0-9\s]{5})([A-Z0-9\s]{5})([\s]{1})([0-9\s]{2})([0-9\s]{1})' +
r'([0-9\s]{1})([0-9\s]{1})([0-9\s]{1})([\s]{33})([0-9\s]{3})([0-9\s]{1})([0-9\s]{1})')
    try:
        rs = np.array(record_split.split(row))[selected_indices]
    except:
        rs = np.array(['-99'] * len(selected_indices))
    return rs
    # return record_split.split(row)
Now, instead of using a lambda expression, we will use the extractInformation(...) method to split and convert our dataset.
data_from_file_conv = data_from_file.map(extractInformation)
data_from_file_conv.map(lambda row: row).take(1)
[array(['1', ' ', '2', '1', '01', 'M', '1', '087', ' ', '43', '23', '11',
' ', '4', 'M', '4', '2014', 'U', '7', 'C', 'N', ' ', ' ', 'I64 ',
'238', '070', ' ', '24', '01', '11I64 ', ' ', ' ',
' ', ' ', ' ', ' ', ' ', ' ',
' ', ' ', ' ', ' ', ' ', ' ',
' ', ' ', ' ', ' ', ' ', '01',
'I64 ', ' ', ' ', ' ', ' ', ' ', ' ',
' ', ' ', ' ', ' ', ' ', ' ', ' ',
' ', ' ', ' ', ' ', ' ', ' ', '01', ' ',
' ', '1', '1', '100', '6'],
dtype='<U40')]
Working in cluster mode:
- A job is submitted to the driver (master node), which builds a DAG of tasks and assigns them to the workers.
- The driver ships the variables and methods needed for the tasks to the workers; each worker gets its own copy, as the sketch below illustrates.
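A minimal sketch of this copy semantics, assuming the SparkContext sc created above: mutating a driver-side variable inside a task only changes the worker's copy, which is why Spark provides accumulators for aggregating values across workers.
counter = 0

def increment(x):
    global counter
    counter += x              # updates the worker's copy only

sc.parallelize([1, 2, 3, 4]).foreach(increment)
print(counter)                # still 0 on the driver

acc = sc.accumulator(0)       # accumulators are the supported way to aggregate
sc.parallelize([1, 2, 3, 4]).foreach(lambda x: acc.add(x))
print(acc.value)              # 10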
Transformations
.map(…)
The method is applied to each element of the RDD: in the case of the data_from_file_conv dataset you can think of this as a transformation of each row.
data_2014 = data_from_file_conv.map(lambda row: int(row[16]))
data_2014.take(10)
[2014, 2014, 2014, 2014, 2014, 2014, 2014, 2014, 2014, -99]
You can also combine more columns, returning them as a tuple.
data_2014_2 = data_from_file_conv.map(lambda row: (row[16], int(row[16])))
data_2014_2.take(10)
[('2014', 2014),
('2014', 2014),
('2014', 2014),
('2014', 2014),
('2014', 2014),
('2014', 2014),
('2014', 2014),
('2014', 2014),
('2014', 2014),
('-99', -99)]
.filter(…)
The .filter(...) method allows you to select elements of your dataset that fit specified criteria.
data_filtered = data_from_file_conv.filter(lambda row: row[5] == 'F' and row[21] == '0')
data_filtered.count()
6
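Since extractInformation(...) marks rows it failed to parse with '-99' in every field, .filter(...) is also a handy way to drop those malformed records; a quick sketch (the variable name data_parsed is illustrative):
# keep only the rows where the year field parsed correctly
data_parsed = data_from_file_conv.filter(lambda row: row[16] != '-99')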
.flatMap(…)
The .flatMap(...) method works similarly to .map(...), but it returns a flattened result instead of a list.
data_2014_flat = data_from_file_conv.flatMap(lambda row: (row[16], int(row[16]) + 1))
data_2014_flat.take(10)
['2014', 2015, '2014', 2015, '2014', 2015, '2014', 2015, '2014', 2015]
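Because the results are flattened, returning an empty list from the function drops that element entirely. A sketch (output values are illustrative) that converts the year while skipping the malformed rows:
data_2014_clean = data_from_file_conv.flatMap(
    lambda row: [int(row[16])] if row[16] != '-99' else [])
data_2014_clean.take(3)  # e.g. [2014, 2014, 2014]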
.distinct()
This method returns a list of distinct values in a specified column.
distinct_gender = data_from_file_conv.map(lambda row: row[5]).distinct().collect()
distinct_gender
['-99', 'M', 'F']
.sample(…)
The .sample(...) method returns a randomized sample from the dataset. The first parameter specifies whether the sampling should be with replacement, the second defines the fraction of the data to return, and the third is the seed for the pseudo-random number generator.
fraction = 0.1
data_sample = data_from_file_conv.sample(False, fraction, 666)
data_sample.take(1)
[array(['1', ' ', '5', '1', '01', 'F', '1', '082', ' ', '42', '22', '10',
' ', '4', 'W', '5', '2014', 'U', '7', 'C', 'N', ' ', ' ', 'I251',
'215', '063', ' ', '21', '02', '11I350 ', '21I251 ', ' ',
' ', ' ', ' ', ' ', ' ', ' ',
' ', ' ', ' ', ' ', ' ', ' ',
' ', ' ', ' ', ' ', ' ', '02',
'I251 ', 'I350 ', ' ', ' ', ' ', ' ', ' ',
' ', ' ', ' ', ' ', ' ', ' ', ' ',
' ', ' ', ' ', ' ', ' ', ' ', '28', ' ',
' ', '2', '4', '100', '8'],
dtype='<U40')]
Let’s confirm that we really got 10% of all the records.
print('Original dataset: {0}, sample: {1}'.format(data_from_file_conv.count(), data_sample.count()))
Original dataset: 2631171, sample: 263247
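The fraction achieved is approximate by design; a quick check using the counts above:
print(data_sample.count() / float(data_from_file_conv.count()))  # roughly 0.1 (263247 / 2631171)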
.leftOuterJoin(…)
A left outer join, just like in the SQL world, joins two RDDs based on their keys and returns all records from the left RDD, with records from the right RDD appended where the keys match.
rdd1 = sc.parallelize([('a', 1), ('b', 4), ('c',10)])
rdd2 = sc.parallelize([('a', 4), ('a', 1), ('b', '6'), ('d', 15)])
rdd3 = rdd1.leftOuterJoin(rdd2)
rdd3.take(5)
[('c', (10, None)), ('b', (4, '6')), ('a', (1, 4)), ('a', (1, 1))]
If we used the .join(...) method instead, we would have gotten only the values for 'a' and 'b', as these two keys appear in both RDDs.
rdd4 = rdd1.join(rdd2)
rdd4.collect()
[('b', (4, '6')), ('a', (1, 4)), ('a', (1, 1))]
Another useful method is .intersection(...), which returns the records that are equal in both RDDs.
rdd5 = rdd1.intersection(rdd2)
rdd5.collect()
[('a', 1)]
.repartition(…)
Repartitioning the dataset changes the number of partitions the dataset is divided into.
rdd1 = rdd1.repartition(4)
len(rdd1.glom().collect())
4
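The .glom() call above returns one list per partition, so counting those lists gives the number of partitions. A small sketch (rdd_small is illustrative) contrasting .repartition(...), which performs a full shuffle, with .coalesce(...), which merges partitions without one:
rdd_small = sc.parallelize(range(8), 4)
print(len(rdd_small.glom().collect()))                  # 4
print(len(rdd_small.repartition(2).glom().collect()))   # 2 -- full shuffle
print(len(rdd_small.coalesce(2).glom().collect()))      # 2 -- merges partitions, no full shuffle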
Actions
.take(…)
The method returns the top n rows from a single data partition.
data_first = data_from_file_conv.take(1)
data_first
[array(['1', ' ', '2', '1', '01', 'M', '1', '087', ' ', '43', '23', '11',
' ', '4', 'M', '4', '2014', 'U', '7', 'C', 'N', ' ', ' ', 'I64 ',
'238', '070', ' ', '24', '01', '11I64 ', ' ', ' ',
' ', ' ', ' ', ' ', ' ', ' ',
' ', ' ', ' ', ' ', ' ', ' ',
' ', ' ', ' ', ' ', ' ', '01',
'I64 ', ' ', ' ', ' ', ' ', ' ', ' ',
' ', ' ', ' ', ' ', ' ', ' ', ' ',
' ', ' ', ' ', ' ', ' ', ' ', '01', ' ',
' ', '1', '1', '100', '6'],
dtype='<U40')]
If you want somewhat randomized records you can use .takeSample(...) instead.
data_take_sampled = data_from_file_conv.takeSample(False, 1, 667)
data_take_sampled
[array(['2', '17', ' ', '0', '08', 'M', '1', '069', ' ', '39', '19', '09',
' ', '1', 'M', '7', '2014', 'U', '7', 'U', 'N', ' ', ' ', 'I251',
'215', '063', ' ', '21', '06', '11I500 ', '21I251 ', '61I499 ',
'62I10 ', '63N189 ', '64K761 ', ' ', ' ', ' ',
' ', ' ', ' ', ' ', ' ', ' ',
' ', ' ', ' ', ' ', ' ', '05',
'I251 ', 'I120 ', 'I499 ', 'I500 ', 'K761 ', ' ', ' ',
' ', ' ', ' ', ' ', ' ', ' ', ' ',
' ', ' ', ' ', ' ', ' ', ' ', '01', ' ',
' ', '1', '1', '100', '6'],
dtype='<U40')]
.reduce(…)
Another action that processes your data: the .reduce(...) method reduces the elements of an RDD using a specified function.
rdd1.map(lambda row: row[1]).reduce(lambda x, y: x + y)
15
If the reducing function is not associative and commutative, you will sometimes get incorrect results depending on how your data is partitioned.
data_reduce = sc.parallelize([1, 2, .5, .1, 5, .2], 1)
If we were to reduce the data by dividing each running result by the subsequent element, we would expect a value of 10.
works = data_reduce.reduce(lambda x, y: x / y)
works
10.0
However, if we partition the data into 3 partitions, the result is wrong.
data_reduce = sc.parallelize([1, 2, .5, .1, 5, .2], 3)
data_reduce.reduce(lambda x, y: x / y)
0.004
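To see where 0.004 comes from, assume the six elements split evenly across the three partitions as [1, 2], [0.5, 0.1], [5, 0.2]. Each partition is reduced first, giving 1 / 2 = 0.5, 0.5 / 0.1 = 5.0 and 5 / 0.2 = 25.0, and the partial results are then reduced in turn: (0.5 / 5.0) / 25.0 = 0.004. Division is neither associative nor commutative, hence the discrepancy.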
The .reduceByKey(...) method works in a similar way to the .reduce(...) method, but performs the reduction on a key-by-key basis.
data_key = sc.parallelize([('a', 4),('b', 3),('c', 2),('a', 8),('d', 2),('b', 1),('d', 3)],4)
data_key.reduceByKey(lambda x, y: x + y).collect()
[('b', 4), ('c', 2), ('a', 12), ('d', 5)]
.count()
The .count() method counts the number of elements in the RDD.
data_reduce.count()
6
It has the same effect as the method below but does not require shifting the data to the driver.
len(data_reduce.collect()) # WRONG -- DON'T DO THIS!
6
If your dataset is in the form of key-value pairs, you can use the .countByKey() method to get the counts of distinct keys. Note that it returns the result to the driver as a dictionary, so it should only be used when the number of distinct keys is small.
data_key.countByKey().items()
dict_items([('a', 2), ('b', 2), ('d', 2), ('c', 1)])
.saveAsTextFile(…)
As the name suggests, the .saveAsTextFile(...) method takes the RDD and saves it to text files: each partition is written to a separate file.
data_key.saveAsTextFile('/Users/drabast/Documents/PySpark_Data/data_key.txt')
To read it back you need to parse it, because, as before, all the rows are treated as strings.
def parseInput(row):
    import re
    pattern = re.compile(r'\(\'([a-z])\', ([0-9])\)')
    row_split = pattern.split(row)
    return (row_split[1], int(row_split[2]))
data_key_reread = sc \
.textFile('/Users/drabast/Documents/PySpark_Data/data_key.txt') \
.map(parseInput)
data_key_reread.collect()
[('a', 4), ('b', 3), ('c', 2), ('a', 8), ('d', 2), ('b', 1), ('d', 3)]
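If you do not need a human-readable file, an alternative (path illustrative) is .saveAsPickleFile(...), which serializes the elements so no parsing is needed when reading back with sc.pickleFile(...):
data_key.saveAsPickleFile('/Users/drabast/Documents/PySpark_Data/data_key_pickle')
sc.pickleFile('/Users/drabast/Documents/PySpark_Data/data_key_pickle').collect()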
.foreach(…)
A method that applies the same function to each element of the RDD in an iterative way. Note that when running on a cluster, anything the function prints appears in the workers' output, not on the driver.
def f(x):
    print(x)

data_key.foreach(f)
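A related sketch: .foreachPartition(...) calls the function once per partition with an iterator over that partition's elements, which is useful when the work has a per-partition setup cost (for example, opening a database connection once per partition). The helper name below is illustrative.
def print_partition(iterator):
    # print every element of a single partition
    for x in iterator:
        print(x)

data_key.foreachPartition(print_partition)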