Tuesday, January 15, 2019

Serverless Model Serving: OpenWhisk, Apache Spark and MLeap

A few years ago I wrote a blog post about Livy, which is partly designed to solve this problem: I could, for example, create a generic Spark job that takes parameters over HTTP and returns a response. There are a few limitations to this approach, though. Submitting Spark jobs carries overhead, and a Spark cluster has finite capacity; those resources are much better spent crunching numbers and fitting models than on jobs that calculate values for one-off HTTP requests.
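
To make the Livy approach concrete, here is a minimal sketch of submitting such a "generic" job over HTTP through Livy's REST API. The host, jar path, class name, and arguments are all placeholders for illustration, not details from my original setup.

```python
import requests

LIVY = "http://livy-host:8998"  # hypothetical Livy server

# Submit a batch job whose arguments carry the request-specific parameters.
resp = requests.post(
    f"{LIVY}/batches",
    json={
        "file": "hdfs:///jobs/generic-job.jar",   # hypothetical job jar
        "className": "com.example.GenericJob",    # hypothetical entry point
        "args": ["--param", "42"],                # per-request parameters
    },
)
batch_id = resp.json()["id"]

# Each request pays the full Spark job-submission cost before any answer
# comes back, which is exactly the overhead described above.
state = requests.get(f"{LIVY}/batches/{batch_id}/state").json()
print(state)  # e.g. {"id": 0, "state": "starting"}
```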

I went searching for an alternative way to solve this problem and eventually came across MLeap. MLeap is a project that, among other things, takes models fit in MLlib and serializes them as a bundle (a zip file containing JSON or binary artifacts) for reuse from its Scala or Python APIs. So instead of running Spark jobs to utilize the 700 TB model, you can write a Play server in Scala or a Flask (or Django, I guess) one in Python. Once I discovered this, the gears started to turn, and I wondered if I could take this a step further and deploy an MLeap model as a serverless function.
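
As a rough illustration of what that serialization looks like, here is a sketch using MLeap's PySpark support; it assumes the mleap package is installed and that pipeline_model and train_df are a fitted PipelineModel and its training DataFrame (both placeholders for your own objects).

```python
import mleap.pyspark  # noqa: F401 -- registers MLeap serialization support
from mleap.pyspark.spark_support import SimpleSparkSerializer  # noqa: F401

# Write the fitted pipeline out as an MLeap bundle (a zip file). MLeap
# needs a transformed DataFrame so it can capture the pipeline's schema.
pipeline_model.serializeToBundle(
    "jar:file:/tmp/model.zip",
    pipeline_model.transform(train_df),
)
```

The resulting model.zip can then be loaded by the MLeap runtime with no Spark cluster in sight, which is what makes the lightweight Play or Flask server possible.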

Around this same time, I started using OpenWhisk for a project, and it clicked that I could (probably) run my MLeap model in a custom container on OpenWhisk. This blog post serves as a how-to for serving an MLeap model as a serverless function on OpenWhisk, from creating the custom container to deploying it on IBM Cloud Functions. Toward the end of the post, I get into the limitations of this approach and offer some alternatives.
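
Before the walkthrough, it helps to know the contract a custom container has to satisfy: OpenWhisk POSTs to /init once when the action starts and to /run on each invocation, with the action parameters nested under a "value" key. Below is a minimal sketch of that interface in Flask; score_with_mleap is a hypothetical stand-in for real MLeap scoring, not part of any library.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/init", methods=["POST"])
def init():
    # A real container might load the MLeap bundle here, once, at startup.
    return jsonify({"ok": True})

@app.route("/run", methods=["POST"])
def run():
    params = request.get_json().get("value", {})  # action parameters
    features = params.get("features", [])
    return jsonify({"prediction": score_with_mleap(features)})

def score_with_mleap(features):
    # Hypothetical stub so the sketch runs end to end; swap in real
    # MLeap runtime scoring against the loaded bundle.
    return sum(features)

if __name__ == "__main__":
    # OpenWhisk expects the action container to listen on port 8080.
    app.run(host="0.0.0.0", port=8080)
```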

