diff --git a/Dockerfile.aarch64 b/Dockerfile.aarch64
index dd79cb3..65462e5 100644
--- a/Dockerfile.aarch64
+++ b/Dockerfile.aarch64
@@ -25,6 +25,8 @@ RUN \
jellyfin-ffmpeg \
libfontconfig1 \
libfreetype6 \
+ libomxil-bellagio0 \
+ libomxil-bellagio-bin \
libssl1.0.0 && \
echo "**** install jellyfin *****" && \
if [ -z ${JELLYFIN_RELEASE+x} ]; then \
diff --git a/Dockerfile.armhf b/Dockerfile.armhf
index e97bc15..dddd603 100644
--- a/Dockerfile.armhf
+++ b/Dockerfile.armhf
@@ -27,6 +27,8 @@ RUN \
jellyfin-ffmpeg \
libfontconfig1 \
libfreetype6 \
+ libomxil-bellagio0 \
+ libomxil-bellagio-bin \
libraspberrypi0 \
libssl1.0.0 && \
echo "**** install jellyfin *****" && \
diff --git a/README.md b/README.md
index e21e7df..c320897 100644
--- a/README.md
+++ b/README.md
@@ -74,7 +74,9 @@ docker create \
-v :/data/tvshows \
-v :/data/movies \
-v :/transcode `#optional` \
+ -v /opt/vc/lib:/opt/vc/lib `#optional` \
--device /dev/dri:/dev/dri `#optional` \
+ --device /dev/vchiq:/dev/vchiq `#optional` \
--restart unless-stopped \
linuxserver/jellyfin
```
@@ -101,11 +103,13 @@ services:
- :/data/tvshows
- :/data/movies
- :/transcode #optional
+ - /opt/vc/lib:/opt/vc/lib #optional
ports:
- 8096:8096
- 8920:8920 #optional
devices:
- /dev/dri:/dev/dri #optional
+ - /dev/vchiq:/dev/vchiq #optional
restart: unless-stopped
```
@@ -125,7 +129,9 @@ Container images are configured using parameters passed at runtime (such as thos
| `-v /data/tvshows` | Media goes here. Add as many as needed e.g. `/data/movies`, `/data/tv`, etc. |
| `-v /data/movies` | Media goes here. Add as many as needed e.g. `/data/movies`, `/data/tv`, etc. |
| `-v /transcode` | Path for transcoding folder, *optional*. |
+| `-v /opt/vc/lib` | Path for Raspberry Pi OpenMAX libs, *optional*. |
| `--device /dev/dri` | Only needed if you want to use your Intel GPU for hardware accelerated video encoding (vaapi). |
+| `--device /dev/vchiq` | Only needed if you want to use your Raspberry Pi OpenMAX hardware accelerated video encoding (Bellagio). |
## Environment variables from files (Docker secrets)
@@ -160,18 +166,33 @@ Webui can be found at `http://:8096`
More information can be found in their official documentation [here](https://github.com/MediaBrowser/Wiki/wiki) .
+## Hardware Acceleration
+
+### Intel
+
Hardware acceleration users for Intel Quicksync will need to mount their /dev/dri video device inside of the container by passing the following command when running or creating the container:
```--device=/dev/dri:/dev/dri```
We will automatically ensure the abc user inside of the container has the proper permissions to access this device.
+### Nvidia
+
Hardware acceleration users for Nvidia will need to install the container runtime provided by Nvidia on their host, instructions can be found here:
https://github.com/NVIDIA/nvidia-docker
We automatically add the necessary environment variable that will utilise all the features available on a GPU on the host. Once nvidia-docker is installed on your host you will need to re/create the docker container with the nvidia container runtime `--runtime=nvidia` and add an environment variable `-e NVIDIA_VISIBLE_DEVICES=all` (can also be set to a specific gpu's UUID, this can be discovered by running `nvidia-smi --query-gpu=gpu_name,gpu_uuid --format=csv` ). NVIDIA automatically mounts the GPU and drivers from your host into the jellyfin docker container.
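+
+Putting those pieces together, a minimal run command would look like this (illustrative; combine with the volume and port options from the Usage section above):
+
+```
+docker run -d \
+  --runtime=nvidia \
+  -e NVIDIA_VISIBLE_DEVICES=all \
+  linuxserver/jellyfin
+```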
+### OpenMAX (Raspberry Pi)
+
+Hardware acceleration users for Raspberry Pi OpenMAX will need to mount their /dev/vchiq video device inside of the container and their system OpenMAX libs by passing the following options when running or creating the container:
+
+```
+--device=/dev/vchiq:/dev/vchiq
+-v /opt/vc/lib:/opt/vc/lib
+```
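+
+On a Raspberry Pi host, the device and libraries can be verified before adding these options (illustrative check):
+
+```
+ls -l /dev/vchiq
+ls /opt/vc/lib
+```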
+
## Support Info
@@ -238,6 +259,7 @@ Once registered you can define the dockerfile to use with `-f Dockerfile.aarch64
## Versions
+* **09.01.20:** - Add Raspberry Pi OpenMAX support.
* **02.10.19:** - Improve permission fixing for render & dvb devices.
* **31.07.19:** - Add AMD drivers for vaapi support on x86.
* **13.06.19:** - Add Intel drivers for vaapi support on x86.
diff --git a/readme-vars.yml b/readme-vars.yml
index 1e0bd59..0f6d8ed 100644
--- a/readme-vars.yml
+++ b/readme-vars.yml
@@ -34,9 +34,11 @@ opt_param_env_vars:
opt_param_usage_include_vols: true
opt_param_volumes:
- { vol_path: "/transcode", vol_host_path: "", desc: "Path for transcoding folder, *optional*." }
+ - { vol_path: "/opt/vc/lib", vol_host_path: "/opt/vc/lib", desc: "Path for Raspberry Pi OpenMAX libs, *optional*." }
opt_param_device_map: true
opt_param_devices:
- { device_path: "/dev/dri", device_host_path: "/dev/dri", desc: "Only needed if you want to use your Intel GPU for hardware accelerated video encoding (vaapi)." }
+ - { device_path: "/dev/vchiq", device_host_path: "/dev/vchiq", desc: "Only needed if you want to use your Raspberry Pi OpenMAX hardware accelerated video encoding (Bellagio)." }
opt_param_usage_include_ports: true
opt_param_ports:
- { external_port: "8920", internal_port: "8920", port_desc: "Https webUI (you need to setup your own certificate)." }
@@ -49,20 +51,36 @@ app_setup_block: |
More information can be found in their official documentation [here](https://github.com/MediaBrowser/Wiki/wiki) .
+ ## Hardware Acceleration
+
+ ### Intel
+
Hardware acceleration users for Intel Quicksync will need to mount their /dev/dri video device inside of the container by passing the following command when running or creating the container:
```--device=/dev/dri:/dev/dri```
We will automatically ensure the abc user inside of the container has the proper permissions to access this device.
-
+
+ ### Nvidia
+
Hardware acceleration users for Nvidia will need to install the container runtime provided by Nvidia on their host, instructions can be found here:
https://github.com/NVIDIA/nvidia-docker
We automatically add the necessary environment variable that will utilise all the features available on a GPU on the host. Once nvidia-docker is installed on your host you will need to re/create the docker container with the nvidia container runtime `--runtime=nvidia` and add an environment variable `-e NVIDIA_VISIBLE_DEVICES=all` (can also be set to a specific gpu's UUID, this can be discovered by running `nvidia-smi --query-gpu=gpu_name,gpu_uuid --format=csv` ). NVIDIA automatically mounts the GPU and drivers from your host into the jellyfin docker container.
+
+ ### OpenMAX (Raspberry Pi)
+
+ Hardware acceleration users for Raspberry Pi OpenMAX will need to mount their /dev/vchiq video device inside of the container and their system OpenMAX libs by passing the following options when running or creating the container:
+
+ ```
+ --device=/dev/vchiq:/dev/vchiq
+ -v /opt/vc/lib:/opt/vc/lib
+ ```
# changelog
changelogs:
+ - { date: "09.01.20:", desc: "Add Raspberry Pi OpenMAX support." }
- { date: "02.10.19:", desc: "Improve permission fixing for render & dvb devices." }
- { date: "31.07.19:", desc: "Add AMD drivers for vaapi support on x86." }
- { date: "13.06.19:", desc: "Add Intel drivers for vaapi support on x86." }
diff --git a/root/etc/cont-init.d/40-gid-video b/root/etc/cont-init.d/40-gid-video
index 8e0dec3..9ebcb11 100644
--- a/root/etc/cont-init.d/40-gid-video
+++ b/root/etc/cont-init.d/40-gid-video
@@ -1,6 +1,6 @@
#!/usr/bin/with-contenv bash
-FILES=$(find /dev/dri /dev/dvb -type c -print 2>/dev/null)
+FILES=$(find /dev/dri /dev/dvb /dev/vchiq -type c -print 2>/dev/null)
for i in $FILES
do
@@ -24,3 +24,10 @@ done
if [ -n "${FILES}" ] && [ ! -f "/groupadd" ]; then
usermod -a -G root abc
fi
+
+# openmax lib loading
+if [ -e "/opt/vc/lib" ] && [ ! -e "/etc/ld.so.conf.d/00-vmcs.conf" ]; then
+ echo "[jellyfin-init] Raspberry Pi libs detected, loading"
+ echo "/opt/vc/lib" > "/etc/ld.so.conf.d/00-vmcs.conf"
+ ldconfig
+fi
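+
+# note (illustrative): once ldconfig has run, the Pi userland libraries under
+# /opt/vc/lib (e.g. libopenmaxil.so) resolve via the standard dynamic linker path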