
Examining the ratings counter script
You should be able to get the ratings-counter.py file from the download package for this book. When you've downloaded the ratings counter script to your computer, copy it to your SparkCourse directory:

Once you have it there, assuming you've installed Enthought Canopy or another Python development environment, you should be able to just double-click on the file and it will open right up in that environment:
from pyspark import SparkConf, SparkContext
import collections
conf = SparkConf().setMaster("local").setAppName("RatingsHistogram")
sc = SparkContext(conf = conf)
lines = sc.textFile("file:///SparkCourse/ml-100k/u.data")
ratings = lines.map(lambda x: x.split()[2])
result = ratings.countByValue()
sortedResults = collections.OrderedDict(sorted(result.items()))
for key, value in sortedResults.items():
    print("%s %i" % (key, value))
Now I don't want to get into too much detail as to what's actually going on in the script yet. I just want you to get a little bit of a payoff for all the work you've done so far. However, if you look at the script, it's actually not that hard to figure out. We're just importing the Spark stuff that we need for Python here:
from pyspark import SparkConf, SparkContext
import collections
Next, we're doing some configuration and setup of Spark:
conf = SparkConf().setMaster("local").setAppName("RatingsHistogram")
sc = SparkContext(conf = conf)
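To make those two calls a little more concrete, here is a commented sketch of the same setup (the "local[*]" master mentioned in the comment is just an alternative you could try later, not something this script uses):
# SparkConf holds the settings for this job.
# "local" tells Spark to run on this machine using a single thread;
# "local[*]" would use every core on the machine instead.
conf = SparkConf().setMaster("local").setAppName("RatingsHistogram")
# The SparkContext is our entry point for creating RDDs.
sc = SparkContext(conf = conf)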
In the next line, we're going to load the u.data file from the MovieLens dataset that we just installed, so that is the file that contains all of the 100,000 movie ratings:
lines = sc.textFile("file:///SparkCourse/ml-100k/u.data")
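If you're curious what those raw lines look like, each line of u.data is tab-separated: a user ID, a movie ID, a rating, and a timestamp. As a quick illustration (not part of the script), you could peek at the first few lines like this:
for line in lines.take(3):
    print(line)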
We then parse that data into different fields:
ratings = lines.map(lambda x: x.split()[2])
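To see what that lambda does, imagine applying it to a single, made-up line of the file: split() breaks the line apart on whitespace, and [2] picks out the third field, which is the rating.
sample = "196\t242\t3\t881250949"  # hypothetical line: user ID, movie ID, rating, timestamp
print(sample.split()[2])           # prints 3, the rating field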
Then we call a little function in Spark called countByValue that will actually split up that data for us:
result = ratings.countByValue()
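countByValue just tallies how many times each distinct value occurs in the RDD and returns the counts to the driver as an ordinary Python dictionary. Here's a tiny, hand-made illustration of the same action (again, not part of the script):
sample = sc.parallelize(["5", "3", "5", "1", "5"])
print(sample.countByValue())  # something like {'5': 3, '3': 1, '1': 1}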
What we're trying to do is create a histogram of our ratings data. We want to find out, of all those 100,000 ratings, how many were five-star, how many four-star, how many three-star, how many two-star, and how many one-star. Back when this dataset was made there were no half-star ratings; the only choices were one, two, three, four, and five stars. So we're going to count up how many of each rating appears in the dataset, and when we're done we'll sort the results and print them out with these lines of code:
sortedResults = collections.OrderedDict(sorted(result.items()))
for key, value in sortedResults.items():
    print("%s %i" % (key, value))
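Here, sorted(result.items()) orders the (rating, count) pairs by the rating key, and OrderedDict preserves that order while we loop through and print. As a tiny standalone illustration with made-up counts:
import collections

result = {"3": 2, "1": 5, "5": 1}  # made-up counts, just for illustration
sortedResults = collections.OrderedDict(sorted(result.items()))
for key, value in sortedResults.items():
    print("%s %i" % (key, value))  # prints 1 5, then 3 2, then 5 1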
That's all that's going on in this script, so let's go ahead and run it and see if it works.