The Recommendation API is developed as a Tomcat webapp inside the platform repository:

bitbucket.org/mico-project/platform/src/HEAD/showcase-webapp/.
The deployment can be done automatically using Maven by configuring the respective MICO instance
in the pom.xml. More details regarding the configuration can be found here.
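
A deployment invocation from the command line could look roughly like the following sketch. It assumes the showcase-webapp pom.xml is set up for the tomcat7-maven-plugin; the goal, property names, hostname, and credentials below are placeholders and have to match the actual configuration:

$ cd platform/showcase-webapp
# build the war and redeploy it via the Tomcat manager of the MICO instance
$ mvn clean package tomcat7:redeploy \
      -Dmaven.tomcat.url=http://mico-instance:8080/manager/text \
      -Dtomcat.username=admin -Dtomcat.password=secret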

Having built the *.war file, a manual deployment is possible as well, either by uploading it through the Tomcat manager webapp or by placing showcase-webapp.war directly in the directory /var/lib/tomcat7/webapps/. The Tomcat server detects and deploys the file automatically (startup usually takes about two minutes).
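
Copying the file onto the host could, for example, be done like this; "mico-host" and the user account are placeholders for the actual instance, and the paths follow the Debian/Ubuntu tomcat7 layout mentioned above:

# copy the built war onto the MICO host and move it into the Tomcat webapps folder
$ scp showcase-webapp/target/showcase-webapp.war user@mico-host:/tmp/
$ ssh user@mico-host "sudo mv /tmp/showcase-webapp.war /var/lib/tomcat7/webapps/"
# Tomcat picks the file up automatically; the deployment can be followed in the log
$ ssh user@mico-host "sudo tail -f /var/log/tomcat7/catalina.out"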

For collaborative filtering applications, prediction.io must be installed on the platform. The suggested way is to install prediction.io inside a Docker container, which decouples its dependencies from the platform and makes the installation process reproducible.

After that, the Recommendation API is available as described in the API documentation.

Running Prediction.io using a prepared MICO Dockerfile

Docker accepts git repositories instead of local files as references to the Dockerfile and its build context. This allows us to
store the Dockerfile inside Bitbucket for testing.
You have to adjust the path accordingly (e.g., change the username); see the Docker documentation for the remote build context syntax.

During the build phase, all of the prediction.io requirements (Spark, Elasticsearch, …) are downloaded and configured automatically inside the Docker container.
For public releases, this can be published in public repositories, so the adjustment step is only
required during development:

$ docker build --tag="wp5" https://user@bitbucket.org/mico-project/recommendation.git#greenpeace:PredictionIO/docker
$ docker run -p 8000:8000 -p 7070:7070 -p 9000:9000 wp5
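
Starting all of the services inside the container can take a moment; standard Docker commands can be used to watch it. The container id below is whatever Docker assigned to the run above:

$ docker ps                       # shows the running wp5 container and its port mappings
$ docker logs -f <container-id>   # follow the prediction.io / Spark / Elasticsearch startup output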

The engine is deployed on port 8000 and can be used by the demo interface or by showcase partners. Check
http://localhost:8000 for the prediction.io overview page.
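
To verify the deployment, the exposed ports can also be checked directly, e.g. with curl. This assumes the standard prediction.io port layout (event server on 7070, deployed engine on 8000); the query fields are only illustrative, since the actual JSON format depends on the deployed engine template:

# event server health check, should answer with {"status":"alive"}
$ curl -i http://localhost:7070
# overview page served by the deployed engine
$ curl -s http://localhost:8000 | head
# example query against the deployed engine; field names depend on the engine template
$ curl -H "Content-Type: application/json" \
       -d '{ "user": "u1", "num": 5 }' \
       http://localhost:8000/queries.json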