
Remote Python Executor (RPE) Installation


Overview

The Remote Python Script Snap can execute a Python script natively on a local or remote Python executor. The Remote Python Executor (RPE) can be installed either on the same node as the Snaplex or on a remote node that is accessible from the Snaplex.

Steps

  1. Establish an SSH connection to the node.
  2. Install Docker CE.
  3. Create and start the RPE container.

SSH to the Node

Establish an SSH connection to the node where you want to install the RPE. Root access may be required for the subsequent steps.
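For example, a connection can be established as follows; the user, key path, and hostname below are placeholders for your own values:

```shell
# Connect to the target node (user, key path, and hostname are placeholders).
ssh -i ~/.ssh/my_key.pem ec2-user@rpe-node.example.com
```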

Install Docker

Follow the instructions on the official Docker site to install Docker CE on your machine. Installation guides are available for Ubuntu, CentOS, and Windows. If you are using an Amazon Linux AMI, follow the Amazon-specific instructions.
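On Ubuntu, one common route is Docker's official convenience script; verify it against the Docker documentation for your distribution before running:

```shell
# Download and run Docker's installation script, then verify the install.
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo docker run hello-world
```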

Start RPE

Docker Images

A Docker container is an instance created from a Docker image. We provide the RPE as Docker images, which can be found here. Currently, four tags (variants) are available:

  1. min: the minimal version for CPU instances.
  2. ds: the data science version for CPU instances containing recommended libraries for data science.
  3. min-cuda9-cudnn7: the minimal version for GPU instances.
  4. ds-cuda9-cudnn7: the data science version for GPU instances containing recommended libraries for data science.
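To fetch one of these images ahead of time, you can pull it explicitly. The repository name below is a placeholder; use the repository referenced in the link above:

```shell
# Pull the data science image (the repository name "snaplogic/rpe" is hypothetical).
docker pull snaplogic/rpe:ds
```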


Below is the list of Python libraries in the ds and ds-cuda9-cudnn7 tags. 

Library          Version
---------------  -------
simplejson       3.16.0
requests         2.20.0
jsonpickle       1.0
python-dateutil  2.7.4
more-itertools   4.3.0
pydub            0.23.0
numpy            1.15.3
scipy            1.1.0
scikit-learn     0.20.0
xgboost          0.80
lightgbm         2.2.1
pillow           5.3.0
bokeh            0.13.0
pandas           0.23.4
tensorflow       1.5.0
keras            2.2.4
nltk             3.3
textblob         0.15.1

ds-cuda9-cudnn7 has tensorflow-gpu==1.11.0 instead of tensorflow==1.5.0.

CPU Instance

Execute the following command to start the data science version of the RPE container:
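A sketch of the command, assembled from the options documented in the Options section below; the image repository name is a placeholder, so substitute the repository from the Docker images link:

```shell
docker run -d \
  --memory-swap="-1" --restart=always \
  -p 5301:5301 \
  -e "REMOTE_PYTHON_EXECUTOR_TOKEN=<token>" \
  -v /opt/remote_python_executor_log/:/opt/remote_python_executor_log/ \
  --name=rpe \
  snaplogic/rpe:ds   # hypothetical repository; the "ds" tag selects the data science image
```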

Execute the following command to start the minimal version of the RPE container:
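The minimal version uses the same options with the min tag; the repository name is again a placeholder:

```shell
docker run -d \
  --memory-swap="-1" --restart=always \
  -p 5301:5301 \
  -e "REMOTE_PYTHON_EXECUTOR_TOKEN=<token>" \
  -v /opt/remote_python_executor_log/:/opt/remote_python_executor_log/ \
  --name=rpe \
  snaplogic/rpe:min   # hypothetical repository; the "min" tag selects the minimal image
```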


GPU Instance

Execute the following command to start the data science version of the RPE container on a GPU instance:
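On a GPU instance the container additionally needs access to the GPUs. A sketch, assuming the NVIDIA container runtime (nvidia-docker2) is installed and with a placeholder repository name:

```shell
docker run -d --runtime=nvidia \
  --memory-swap="-1" --restart=always \
  -p 5301:5301 \
  -e "REMOTE_PYTHON_EXECUTOR_TOKEN=<token>" \
  -v /opt/remote_python_executor_log/:/opt/remote_python_executor_log/ \
  --name=rpe \
  snaplogic/rpe:ds-cuda9-cudnn7   # hypothetical repository; CUDA 9 / cuDNN 7 data science image
```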


Execute the following command to start the minimal version of the RPE container on a GPU instance:
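The GPU minimal version differs only in the image tag; the same NVIDIA runtime assumption and placeholder repository name apply:

```shell
docker run -d --runtime=nvidia \
  --memory-swap="-1" --restart=always \
  -p 5301:5301 \
  -e "REMOTE_PYTHON_EXECUTOR_TOKEN=<token>" \
  -v /opt/remote_python_executor_log/:/opt/remote_python_executor_log/ \
  --name=rpe \
  snaplogic/rpe:min-cuda9-cudnn7   # hypothetical repository; CUDA 9 / cuDNN 7 minimal image
```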

Options

See below for an explanation of each option:

Option:      --memory-swap="-1" --restart=always
Format:      --memory-swap="<memory_swap_limit>" --restart=always
Description: The container can use unlimited space for memory swapping; see Memory Swap for more information. --restart=always automatically restarts the Docker container if the machine restarts.

Option:      -p 5301:5301
Format:      -p <host_port>:5301
Description: The RPE is accessible from the host port, which is 5301 by default. You may change the host port to support multiple containers on the same node.

Option:      -e "REMOTE_PYTHON_EXECUTOR_TOKEN="
Format:      -e "REMOTE_PYTHON_EXECUTOR_TOKEN=<token>"
Description: The default token is empty. A strong token is recommended.

Option:      -v /opt/remote_python_executor_log/:/opt/remote_python_executor_log/
Format:      -v <log_dir>:/opt/remote_python_executor_log/
Description: The log is mounted to /opt/remote_python_executor_log/ by default. This location can be changed.

Option:      --name=rpe
Format:      --name=<container_name>
Description: The container's name, which can be changed in case of multiple containers on the same node.

Manage RPE

Since the RPE starts with the container, use the following Docker commands to start, stop, restart, and remove the RPE.
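For example, assuming the container was started with --name=rpe:

```shell
docker stop rpe      # stop the RPE
docker start rpe     # start it again
docker restart rpe   # restart it
docker rm -f rpe     # stop and remove the container entirely
```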

Custom Image

The custom RPE package can be downloaded here. This package contains a Dockerfile and supporting files. You can modify the Dockerfile and add required libraries to requirements.txt. Then execute the following command to build the Docker image (make sure to cd into the directory containing the Dockerfile).
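A sketch of the build step; the image name my-custom-rpe is a placeholder:

```shell
# Run from the directory containing the Dockerfile.
docker build -t my-custom-rpe .
```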

Once the image is built, execute the following command to create and start the custom RPE container.
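A sketch, reusing the options documented above with the placeholder image name from the build step:

```shell
docker run -d \
  --memory-swap="-1" --restart=always \
  -p 5301:5301 \
  -e "REMOTE_PYTHON_EXECUTOR_TOKEN=<token>" \
  -v /opt/remote_python_executor_log/:/opt/remote_python_executor_log/ \
  --name=rpe-custom \
  my-custom-rpe   # the image built in the previous step (placeholder name)
```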
