
Access to Spark from Flask app

I wrote a simple Flask app to pass some data to Spark. The script works in IPython Notebook, but not when I try to run it on its own server. I don't think that the Spark context is running within the script. How do I get Spark working in the following example?

from flask import Flask, request
from pyspark import SparkConf, SparkContext

app = Flask(__name__)

conf = SparkConf()
conf.set("spark.executor.memory", "1g")
sc = SparkContext(conf=conf)

@app.route('/accessFunction', methods=['POST'])
def toyFunction():
    posted_data = sc.parallelize([request.get_data()])
    return str(posted_data.collect()[0])

if __name__ == '__main__':
    app.run()

In IPython Notebook I don't define the SparkContext because it is automatically configured. I don't remember exactly how I set this up; I followed some blog posts.
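
As far as I understand, the notebook's startup effectively does something like the following for me (a rough sketch of that auto-configuration, not the exact code the notebook runs; the app name here is made up):

    from pyspark import SparkConf, SparkContext

    # Roughly what the IPython/pyspark startup does automatically (a sketch);
    # "NotebookShell" is an arbitrary app name, not what the notebook really uses.
    conf = SparkConf().setMaster("local[*]").setAppName("NotebookShell")
    sc = SparkContext(conf=conf)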

On the Linux server I have set the .py file to run continuously and installed the latest Spark by following up to step 5 of this guide.


Following davidism's advice, I have now resorted to running simple programs of increasing complexity to localise the error.

Firstly I created a .py file with just the script from the answer below (after appropriately adjusting the links):

import sys

try:
    from pyspark import context
    print("Successfully imported Spark Modules")
except ImportError as e:
    print("Can not import Spark Modules", e)
    sys.exit(1)

This returns "Successfully imported Spark Modules". However, the next .py file I made returns an exception:

from pyspark import SparkContext
sc = SparkContext('local')
rdd = sc.parallelize([0])
print(rdd.count())

This returns the exception:

"Java gateway process exited before sending the driver its port number"

Searching around for similar problems, I found this page, but when I run that code nothing happens: no print on the console and no error messages. Similarly, this did not help either; I get the same Java gateway exception as above. I have also installed Anaconda, as I heard this may help unite Python and Java, but again no success...
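
One check worth doing here (and the kind of thing I have been poking at): pyspark launches a JVM behind the scenes, so the Java gateway error usually means that launch failed, and verifying the relevant environment variables is a quick sanity check. The example paths below are just from my own setup and are assumptions, not requirements:

    import os

    # Print the variables pyspark relies on when starting its Java gateway.
    # The "expected" paths are examples from my own machine, nothing more.
    for var, example in [("JAVA_HOME", "/usr/lib/jvm/java-7-openjdk-amd64"),
                         ("SPARK_HOME", "/home/ubuntu/spark-1.5.0-bin-hadoop2.6")]:
        value = os.environ.get(var)
        if value:
            print(var, "=", value)
        else:
            print(var, "is not set (expected something like", example + ")")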

Any suggestions about what to try next? I am at a loss.


Okay, so I'm going to answer my own question in the hope that someone out there won't suffer the same days of frustration! It turns out it was a combination of missing code and a bad setup.

Editing the code: I did indeed need to initialise a SparkContext by adding the following to the preamble of my code:

from pyspark import SparkContext
sc = SparkContext('local')

So the full code will be:

from pyspark import SparkContext
sc = SparkContext('local')

from flask import Flask, request
app = Flask(__name__)

@app.route('/whateverYouWant', methods=['POST'])  # can set first param to '/'
def toyFunction():
    posted_data = sc.parallelize([request.get_data()])
    return str(posted_data.collect()[0])

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8080)    # note: set to 8080!

Editing the setup: It is essential that the .py file is in the correct directory, namely it must be saved to the folder /home/ubuntu/spark-1.5.0-bin-hadoop2.6.

Then issue the following command within the directory:
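
Something along these lines, where app.py stands in for whatever the Flask script is actually saved as and local[4] is only an example master setting, so adjust both to your own setup:

    ./bin/spark-submit --master local[4] app.py    # app.py is a placeholder filename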


which initiates the service at 10.0.0.XX:8080/accessFunction/.

Note that the port must be set to 8080 or 8081: by default Spark only allows the web UI on these ports, for the master and worker respectively.

You can test out the service with a REST client, or by opening a new terminal and sending a POST request with cURL:

curl --data "DATA YOU WANT TO POST" http://10.0.0.XX:8080/accessFunction/
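
Or, equivalently, from Python with the requests library (the address is the same placeholder as in the cURL line; this is just a sketch of the same test, not part of the app itself):

    import requests  # pip install requests

    # Send the same payload the cURL command posts; 10.0.0.XX is a placeholder.
    resp = requests.post("http://10.0.0.XX:8080/accessFunction/",
                         data="DATA YOU WANT TO POST")
    print(resp.status_code, resp.text)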