WUR | WEC | CSA | PPS | GRS

1 Introduction

The keras3 R package runs on the keras Python package under the hood, which requires a Python and tensorflow backend of exactly the versions required by keras3, preferably installed in its own virtual environment. This can be quite involved to set up on your own system, because often there is already some version of Python floating around on your system that can interfere with the Python version required by the keras3 R package.

2 Docker installation

To prevent these installation issues, we provide you with a containerized installation of RStudio Server, in which R 4.4.3 with the tidyverse, keras3, imager and abind packages is already pre-installed. The container image also includes the required Python and tensorflow backend, correctly set up to work with keras3 in R, so after starting this container you should be able to work with keras3 in R from the get-go.

Download container image

To achieve this, start by downloading the container image we developed. The file is about 2 GB, so it can download in the background while you continue with the rest of these instructions.

This container can be run through Docker, which provides an isolated Linux environment for each container, in which only the software that is installed inside the container is available. By default, Docker containers also have no access to the files on your own system, so you can consider a container a completely separate system within your own system (each container even has its own IP address!). Docker runs on all three major operating systems (Linux, Windows and macOS), which makes it easy to deploy on many different systems (such as all the different laptops of DSE students). For more information about Docker, see: https://en.wikipedia.org/wiki/Docker_(software)

Download, install and start Docker Desktop

To run Docker containers, we first need to install the Docker Engine. The easiest way to do this is to download Docker Desktop. Download the Docker Desktop executable that is specific to your own OS (scroll down a bit to the download section and ignore the pricing button; Docker is free for personal use). For Windows laptops you need the AMD64 version (if you have an ARM CPU you are probably very much aware of this already; only in that case do you need the ARM64 version). For newer MacBooks with Apple's own chips you should download the Apple Silicon version; for older models with an Intel CPU you should download the Intel version. You can check which processor/CPU you have by clicking the Apple menu button in the top bar of your MacBook and then on “About this Mac” in the drop-down menu.

After having downloaded the correct version of Docker Desktop for your system, use this executable to install Docker Desktop on your own system.

Finally, when you’ve successfully installed Docker Desktop, launch it. Make sure to accept the terms in order to use the software. If it asks whether you want to apply the default settings, agree to that as well. After that, you can skip the sign-in and tutorial windows that pop up and just leave Docker Desktop running in the background.

While Docker Desktop is open, the Docker Engine is also active. When you close Docker Desktop (for example after rebooting your system), the Docker Engine is disabled as well and you cannot run Docker containers anymore until you reopen Docker Desktop. This is convenient, as it prevents your Docker containers from running by default and eating up your system's resources all the time.
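If you are ever unsure whether the Docker Engine is currently running, you can check from the command line. This assumes Docker Desktop is installed, so that the docker command is available in your Command Prompt or Terminal:

```shell
# Prints client and server version info; the "Server" section only
# appears when the Docker Engine is actually running.
docker version

# Alternatively, this prints a summary of the running engine and
# shows an error message when the engine is not running.
docker info
```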

At this point your Docker Engine should be running, but you don’t have any container installed and running yet. Check if your system has already finished downloading the container image from the first task of this page. If not, then you can grab a coffee or something while we wait for it to finish. Otherwise, let’s launch the container.

Launch docker container

If you are using Windows, open Command Prompt from the Start menu with administrator privileges (right-click the Command Prompt executable and click “Run as administrator”). If you are using macOS, open the Terminal application.

Now locate where you have downloaded the container image and copy the path to this file. In my case I downloaded it to the Downloads folder on my Windows laptop, which makes the path to the file: C:/Users/jasper/Downloads/rstudio-tidyverse-keras3.tar. Take care to use forward slashes (/) in the path and not backslashes (\).

Now load the image into Docker using the command line. So in the Command Prompt or Terminal window type the following:

docker load -i "[PATH-TO-YOUR-IMAGE-FILE]"

Make sure to replace the part in between the quotes with the path to the image on your own system. So in my case this would be:

docker load -i "C:/Users/jasper/Downloads/rstudio-tidyverse-keras3.tar"

For Mac users it may be easier to first navigate within Terminal to the folder where you have saved the container image file. So for example when you have saved the file in Downloads, you can run the following two lines in Terminal on your Mac to first navigate to Downloads and then load the image from within that directory:

cd Downloads
docker load -i rstudio-tidyverse-keras3.tar

This loading may take a while to complete. Wait until a new prompt line appears in the terminal; as long as nothing has appeared underneath your command, it is still busy. After it is done, you can check whether Docker has loaded the image successfully by typing:

docker image ls

This should now display the rstudio-tidyverse-keras3 image we have just loaded.

Now we can actually run the docker container from this image. First locate the path to the working directory where you intend to save your scripts and files for this day. We have to bind this directory to the docker container, because by default the container only has access to the directories you explicitly bind to it. After you have located the path to this directory, run the following in the command line:

docker run -d -e DISABLE_AUTH=true -p 127.0.0.1:33806:8787 -v "[PATH-TO-YOUR-WORKING-DIRECTORY]":/home/rstudio/workspace rstudio-tidyverse-keras3:4.4.3

Make sure to replace the [PATH-TO-YOUR-WORKING-DIRECTORY] with your actual path. The rest should remain unchanged. In my case this is:

docker run -d -e DISABLE_AUTH=true -p 127.0.0.1:33806:8787 -v "D:/Projects/Data Science for Ecology/2025/week 4/day 3":/home/rstudio/workspace rstudio-tidyverse-keras3:4.4.3
Do not use a USB-drive for your working directory!
Only use your internal drive (e.g. C:/)!
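In case you wonder what all the parts of this docker run command do, here is a breakdown. The comments use shell syntax; the command itself is identical to the one above, so keep the port number 33806 and the /home/rstudio/workspace path exactly as given:

```shell
# -d                        run the container detached, in the background
# -e DISABLE_AUTH=true      skip the RStudio Server login screen
# -p 127.0.0.1:33806:8787   publish RStudio Server's default port 8787 inside
#                           the container as port 33806 on your own system,
#                           reachable only from your own machine (127.0.0.1)
# -v "...":/home/rstudio/workspace
#                           bind your working directory to the workspace
#                           directory inside the container
# rstudio-tidyverse-keras3:4.4.3
#                           the name and tag of the image to run
docker run -d -e DISABLE_AUTH=true -p 127.0.0.1:33806:8787 -v "[PATH-TO-YOUR-WORKING-DIRECTORY]":/home/rstudio/workspace rstudio-tidyverse-keras3:4.4.3
```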

After this is done you can check whether docker has successfully launched your container by typing:

docker container ls

Now you should be able to run RStudio Server in your browser by opening http://127.0.0.1:33806. This basically functions as a website that you host locally and that is only accessible on your own system while this Docker container is running. This “website” is a fully functional RStudio environment in which you will work for the rest of this assignment. You can also use this RStudio Server for the group assignment during the final two weeks of this course, to do image analyses and run Artificial Neural Networks in R. If you do, remember to first open Docker Desktop again and re-run the docker run ... command via the command line to launch the container.
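If you want to shut the container down yourself (rather than just closing Docker Desktop), you can do so from the command line. The [CONTAINER-ID] placeholder below is something you have to look up yourself, since Docker assigns it when the container starts:

```shell
# List running containers; note the CONTAINER ID (or NAME) of the
# rstudio-tidyverse-keras3 container.
docker container ls

# Stop the container (replace the placeholder with your own ID).
docker container stop [CONTAINER-ID]

# The image itself stays loaded, so you do NOT need to run `docker load`
# again; relaunch later with the same `docker run ...` command as before.
```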

In this RStudio Server the tidyverse, keras3, imager and abind packages are already pre-installed, so you should not install them again. However, you do need to load these packages again when starting your script (e.g. via library(keras3)), just as you do in your regular RStudio environment.

Remember that this docker container only has access to the working directory you provided during setup (in my case "D:/Projects/Data Science for Ecology/2025/week 4/day 3") and cannot read or save files outside of that directory. Inside the docker container this working directory is called "/home/rstudio/workspace". You can check this by running getwd() in your RStudio Server. This is correct, as we bound our own working directory to the working directory of the docker container when setting it up. It means that if you save a file called script.r in your working directory, then inside the docker container the path to this file is "/home/rstudio/workspace/script.r", while on my own system it is located at "D:/Projects/Data Science for Ecology/2025/week 4/day 3/script.r". It also means that you cannot change the working directory of your RStudio Server, because it only has access to this working directory and its subfolders. Just something to keep in mind.
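If you want to convince yourself that this binding works, you can list the workspace directory inside the running container from your own command line; any file you save from RStudio Server should show up both there and in your own working directory. The [CONTAINER-ID] placeholder is something you look up yourself with docker container ls:

```shell
# List the files in the bound workspace directory inside the container;
# these should match the contents of your own working directory.
docker exec [CONTAINER-ID] ls /home/rstudio/workspace
```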

Local installation

If you did not manage to set up the RStudio Server docker container, you could try installing the keras3 R package directly on your own system, within the RStudio environment you have been working in for the rest of the course. For this you also need to install the Python and tensorflow backend, which has quite a substantial chance of failing. So before you try this, first ask one of the practical supervisors for help with the Docker installation process. Only try installing keras3 locally when you have reached a dead end with Docker.

To install and use the keras3 package within R, you need to install the package and then run a special install function from within R. It is best to do this in a clean R session, outside of a script you are currently working in. So first install and load the keras3 package:

install.packages("keras3")
library(keras3)

After that you need to install the Python backend. You can do this from within R by running:

reticulate::install_python(version = "3.11.9")

Then, finally, you install the Python keras implementation with the tensorflow backend. Do this from within R using:

keras3::install_keras(backend = "tensorflow",
                      python_version = "3.11.9")

After keras3::install_keras() has finished successfully, the R session will likely restart. After it has restarted you can load the keras3 package again and check whether the entire backend was installed successfully using the following code:

library(keras3)
reticulate::py_config()

If everything went alright, it will show that it is using Python 3.11.9 and that it loads all packages from a virtual environment (most likely "r-keras"). From within the docker container with a successful keras3 installation it displays the following, which should look similar in your case if the installation completed successfully:

python:         /home/rstudio/.virtualenvs/r-keras/bin/python
libpython:      /home/rstudio/.pyenv/versions/3.11.9/lib/libpython3.11.so
pythonhome:     /home/rstudio/.virtualenvs/r-keras:/home/rstudio/.virtualenvs/r-keras
version:        3.11.9 (main, Mar 25 2025, 08:14:09) [GCC 13.3.0]
numpy:          /home/rstudio/.virtualenvs/r-keras/lib/python3.11/site-packages/numpy
numpy_version:  1.26.4
keras:          /home/rstudio/.virtualenvs/r-keras/lib/python3.11/site-packages/keras

NOTE: Python version was forced by import("keras")

To test whether everything works as it is supposed to, try loading a keras3 dataset that is included with the package using the code below. If this runs without errors, your installation was most likely successful.

mnist <- dataset_mnist()

From now on you can use keras3 simply by loading the package with library(keras3). The installation steps above were a one-time procedure.

Plan C (or Z actually)

If all of the above failed for you, then our advice is to work together with a neighbour with a successful installation of keras3.