ML Workbench for an Existing Jupyter Server

If you have a running JupyterLab server, or you are using JupyterLab on Microsoft Azure, Amazon SageMaker, or Google Vertex AI, you can still take advantage of our ML Workbench by installing its JupyterLab extension and/or the Python package.

You can install into the base kernel or into a Conda environment.

Base kernel installation

From JupyterLab, open a new terminal and run the following command to install pyTigerGraph:

pip install pyTigerGraph[gds]
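Note that in shells such as zsh, the square brackets must be quoted: pip install 'pyTigerGraph[gds]'. Afterward, you can confirm the package is visible to pip; a minimal sketch, which assumes pip is on your PATH:

```shell
# Verify that pyTigerGraph is visible to pip (assumes pip is on PATH).
if command -v pip >/dev/null 2>&1 && pip show pyTigerGraph >/dev/null 2>&1; then
    msg="pyTigerGraph is installed"
else
    msg="pyTigerGraph is not installed (or pip is not on PATH)"
fi
echo "$msg"
```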

Conda environment installation

  1. Run the following command to install our Python environment:

    • CPU: conda env create -f

    • GPU: conda env create -f
  2. Run source activate tigergraph-torch-cpu or source activate tigergraph-torch-gpu, depending on whether you installed the CPU or GPU environment.

  3. Run the following command to install the Python kernel:

    • CPU: python -m ipykernel install --user --name tigergraph-torch-cpu --display-name "TigerGraph Pytorch (cpu)"

    • GPU: python -m ipykernel install --user --name tigergraph-torch-gpu --display-name "TigerGraph Pytorch (gpu)"
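The three steps above differ between CPU and GPU only in the environment name. A hedged sketch of the flow for one target, printed rather than executed (the environment file passed to conda env create -f is whatever file your deployment provides; it is left as a placeholder here):

```shell
# Parameterize the Conda flow by target; "cpu" here, or "gpu".
target="cpu"
env_name="tigergraph-torch-$target"
display_name="TigerGraph Pytorch ($target)"

# Print the three steps for this target (placeholder for the env file).
echo "1. conda env create -f <environment file>"
echo "2. source activate $env_name"
echo "3. python -m ipykernel install --user --name $env_name --display-name \"$display_name\""
```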

JupyterLab extension

Open a new terminal and run the following command to install the JupyterLab extension:

pip install tigergraph-mlworkbench

The JupyterLab extension requires JupyterLab 3.0 or above.
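You can see your version with jupyter lab --version. A minimal sketch of the comparison using sort -V, with a hypothetical version string standing in for the real command's output:

```shell
# In a real session, capture the version with: ver=$(jupyter lab --version)
ver="3.6.3"   # hypothetical example value
min="3.0"

# sort -V orders version strings numerically; if the minimum sorts first,
# the installed version satisfies the requirement.
if [ "$(printf '%s\n%s\n' "$min" "$ver" | sort -V | head -n 1)" = "$min" ]; then
    result="ok"
else
    result="upgrade JupyterLab to 3.0 or above"
fi
echo "$result"
```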

You can check that the extension has been installed by running the command pip show tigergraph-mlworkbench.

If the TigerGraph sidebar icon in JupyterLab doesn’t appear after installing the extension, make sure the extension was installed to the base kernel rather than to an environment. You may also need to refresh the browser after installation to see the icon.
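If the icon still does not appear, you can confirm whether JupyterLab sees the extension at all; a minimal sketch, assuming the jupyter command is on your PATH:

```shell
# List installed lab extensions and look for the TigerGraph entry.
# (jupyter labextension list prints to stderr, hence the redirect.)
if command -v jupyter >/dev/null 2>&1; then
    found=$(jupyter labextension list 2>&1 | grep -i tigergraph)
    msg=${found:-"extension not listed; reinstall in the base kernel"}
else
    msg="jupyter not found on PATH"
fi
echo "$msg"
```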

Next steps

The next step after installation is activation. Once the ML Workbench is activated, go to our Tutorials and Sample Data section.