Common Questions & Answers for the LightOn Cloud

Q: Why do I have to request access to use the LightOn Cloud service?

A: During the early release stage of LightOn Cloud, we are trying to make our service available to as many users as possible, while still making it easy for everyone to find an available time slot. As more OPUs are added to the LightOn Cloud, we will be able to satisfy more access requests.


Q: Does LightOn Cloud support PyTorch and Scikit-learn?

A: Yes, our API offers a scikit-learn interface as well as a PyTorch one. To see how to use them, check out the examples.


Q: Do you have documentation for your API somewhere?

A: Yes, we do. It is here.


Q: How is the IP treated for algorithms running on LightOn Cloud?

A: Any algorithm running on LightOn Cloud is the property of the user. LightOn makes no claim whatsoever on the IP generated by users while using LightOn Cloud. For details, please refer to the LightOn Cloud Terms & Conditions.


Q: Why is my data deleted after each booking?

A: Please keep in mind that only the data folder (~/data) is persistent across bookings. Everything else is erased at the end of your booking. For example, simulation results should go in the data folder.
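A minimal sketch of routing outputs through the persistent folder. The helper name save_results and the file name results.pkl are purely illustrative (not part of any LightOn API); the only fact taken from the answer above is that ~/data survives bookings while everything else is wiped.

```python
import pickle
from pathlib import Path

def save_results(results, folder=Path.home() / "data"):
    """Write results into the given folder; ~/data persists across bookings."""
    folder.mkdir(parents=True, exist_ok=True)
    out = folder / "results.pkl"          # illustrative file name
    with out.open("wb") as f:
        pickle.dump(results, f)
    return out

# Anything written outside ~/data is erased when the booking ends,
# so point all outputs at the data folder:
saved_to = save_results({"accuracy": 0.93})
```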


Q: Is it as easy as changing from CPU to GPU?

A: With LightOn Cloud you have access to a CPU, a GPU, and LightOn’s OPU. A simple function call allows computations to be performed on the LightOn OPU.


Q: Is it possible to combine GPU and OPU calculations?

A: One can combine GPU and OPU computations in the same way as GPU and CPU computations: by running part of the computation on the GPU and another part on the OPU. Note that the operation performed by the OPU is not differentiable.
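To make the split concrete, here is a sketch of such a pipeline using NumPy only. The function simulated_opu_transform is a hypothetical software stand-in (a fixed random projection followed by an element-wise non-linearity); on LightOn Cloud the actual hardware call through LightOn’s API would take its place. Like the hardware operation, this step must sit outside any backpropagation path.

```python
import numpy as np

rng = np.random.default_rng(0)
R = rng.normal(size=(32, 128))  # fixed random matrix, stands in for the optics

def simulated_opu_transform(x):
    """Hypothetical stand-in for the OPU: random projection + element-wise
    non-linearity. Treated as a fixed, non-differentiable feature map."""
    return np.abs(x @ R) ** 2

# Typical split: differentiable layers run on the GPU; the fixed
# random-features step runs on the OPU, with no gradients flowing through it.
x = rng.normal(size=(4, 32))           # e.g. activations computed on the GPU
features = simulated_opu_transform(x)  # OPU step (outside the backprop path)
# ...further GPU layers would consume `features` downstream.
```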


Q: What sort of computation can be performed using the OPUs?

A: Examples and papers using LightOn’s OPU are listed in the Deeper Insight section of “Our Technology”.


Q: Is a JupyterLab interface provided?

A: LightOn Cloud provides a JupyterLab interface: before the start of each booking, you will receive a link by email. You can install any extension, and you can configure extensions to be installed at each boot to automate the process. Please note that if you are connected over SSH, you can find the JupyterLab link with the following command:
grep "NotebookApp.token =" /etc/jupyter/jupyter_notebook_config.py


Q: What about Jupyter Notebook?

A: You can launch a classic Jupyter Notebook from JupyterLab’s Help menu if you like.




Q: What is the procedure for creating virtual environments and adding them as kernels for JupyterLab?

A:

  1. Create the virtual environment: python -m venv myenv
  2. Activate it: source myenv/bin/activate
  3. Install the kernel package: pip install ipykernel
  4. Register the environment as a kernel: python -m ipykernel install --user --name=myenv


Q: How can I connect using SSH?

A: First of all, you need to provide your public SSH key when asked, after your first connection to the booking system. If you haven’t done so, please send an email to support[at]lighton.ai.
Having provided your SSH key, you can connect with:
ssh username@[LOCO].cloud.lighton.ai
(e.g. ssh username@a.cloud.lighton.ai if you have booked SATURN A)


Q: Why am I getting SSH connection warnings?

A: This is expected: the server is re-installed at each booking, so its host key changes. To disable strict host-key checking, you may add this to ~/.ssh/config:

Host ?.cloud.lighton.ai
    # Disable host checking for LightOn cloud
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null

Host *
   # general options go here



Q: Why is sudo not working?

A: For “root” rights you have to use sudo -i or sudo cmd (su - doesn’t work).


Q: How can I work with HDF5 files? / I am getting the following error: OSError: Unable to open file (unable to lock file, errno = 37, error message = ‘No locks available’)

A: This happens because the storage space is a network filesystem that does not support the file locking required by HDF5. To avoid this problem, set the HDF5_USE_FILE_LOCKING environment variable to FALSE.
If you work over SSH, just execute export HDF5_USE_FILE_LOCKING=FALSE in your bash session.
If you work in a Jupyter notebook, put the line `%env HDF5_USE_FILE_LOCKING=FALSE` at the beginning of your notebook.
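In a plain Python script, the same effect can be achieved with os.environ, as in this minimal sketch. The variable must be set before HDF5 opens any file, so it belongs at the very top of the script; the commented h5py lines and the ~/data path are illustrative assumptions, not required names.

```python
import os

# Must be set before HDF5 opens any file, so do this first:
os.environ["HDF5_USE_FILE_LOCKING"] = "FALSE"

# Only then import h5py and work with files as usual, e.g.:
# import h5py
# with h5py.File(os.path.expanduser("~/data/results.h5"), "w") as f:
#     f.create_dataset("x", data=[1, 2, 3])
```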