Frigate NVR and QuickSync on Intel Alder Lake-N: a guide to using Proxmox, Docker, Frigate

Target Hardware: Intel N100 / N95 / N305 / N355

The “driver gap”

Intel Alder Lake-N chips are famous for QuickSync (QSV)—a dedicated slice of the GPU that processes video so your CPU doesn’t have to. However, when these chips were relatively new, the “stable” version of Debian lacked the modern kernels and drivers needed to talk to them. This guide shows how to test whether QuickSync is available, then the steps to get it working.

Saying simply that I was running Frigate NVR on a Proxmox host (Debian v13 – Trixie) misses the detail that Frigate was actually running in a Docker container (on Debian v11 – Bullseye), which was running in a Proxmox LXC (on Debian v12 – Bookworm)!

When the host kernel and driver did update with Debian Trixie, I faced the challenge of getting that driver capability adopted by the two older OSes. I’d call this a “Russian doll” of dependencies: we must selectively update the drivers that bridge the hardware gap, without risking the rest by updating everything.

What about Coral? Not everyone uses one, but a Coral AI is a USB (or PCIe) device that adds powerful AI object detection to relatively low-powered ‘Celeron’-class hardware or Raspberry Pi systems. I use a USB Coral, and this hardware must be ‘passed through’ from the Proxmox host to the LXC and on to the Frigate container. We’ll do this at the same time as ‘passing through’ the GPU capabilities.

NOTE: This guide was written in January 2026, when Proxmox 8 (Bookworm) was the widely deployed release. At the time most users would be on Proxmox 8.2+ with kernel 6.8. This was failing to recognise my mini PC (Minix N355), so I upgraded to the then bleeding-edge version of Proxmox.



0: Frigate configuration

The config ought to have this, either as a global setting or as part of a camera’s settings:

ffmpeg:
  hwaccel_args: preset-intel-qsv-h264
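For context, here’s a minimal sketch of how that global setting can sit alongside a USB Coral detector in config.yml (the detector name ‘coral’ is my own label; check the Frigate docs for your version):

```yaml
# config.yml (fragment) – global hardware acceleration plus a USB Coral detector
ffmpeg:
  hwaccel_args: preset-intel-qsv-h264

detectors:
  coral:
    type: edgetpu
    device: usb
```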

The Frigate logs will show these info lines if all’s well

libva info: Found init function __vaDriverInit_1_XX
libva info: va_openDriver() returns 0
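If you manage the container from a shell, a small helper can pull that check out of the logs for you. This is a sketch: it assumes your container is named ‘frigate’, and the demo line feeds it a sample log line instead of real output.

```shell
# check_libva: reads Frigate log text on stdin and reports whether the
# VA-API driver initialised. Real usage (assumed container name):
#   docker logs frigate 2>&1 | check_libva
check_libva() {
  if grep -q "va_openDriver() returns 0"; then
    echo "QSV driver OK"
  else
    echo "QSV driver NOT initialised"
  fi
}

# demo with a sample log line
printf 'libva info: va_openDriver() returns 0\n' | check_libva  # prints: QSV driver OK
```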

1: prepare the Proxmox host

We’ll first ensure the Proxmox host can actually see and use the hardware.

1.1 The host kernel

Alder Lake-N requires a modern kernel (6.1+). By moving to Proxmox 9.1.4 (Debian Trixie), we ensure the host can see the new GPU.
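A quick way to confirm the 6.1+ requirement is to parse `uname -r` in the host shell; a sketch (the version threshold is the one quoted above):

```shell
# kernel_ok: succeeds if the running kernel is 6.1 or newer
kernel_ok() {
  major=$(uname -r | cut -d. -f1)
  minor=$(uname -r | cut -d. -f2)
  [ "$major" -gt 6 ] || { [ "$major" -eq 6 ] && [ "$minor" -ge 1 ]; }
}

if kernel_ok; then
  echo "kernel $(uname -r): OK for Alder Lake-N"
else
  echo "kernel $(uname -r): too old, upgrade the host first"
fi
```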

1.2 Verify the host hardware by running these commands in the Proxmox Shell:

  • Debian – check your version at a shell/console prompt: cat /etc/debian_version 
  • GPU – check that you have renderD128: ls -l /dev/dri
    Look for renderD128 with crw-rw-rw- permissions. If it is missing, your kernel is too old.
  • USB – check for the Coral AI: lsusb
    Find your Google Coral or USB cameras (e.g., Bus 002 Device 004). Look for 18d1:9302 (after the Coral has loaded its firmware) or 1a6e:089a (its factory state)
  • VA-API Check: vainfo
    Look for a long list of codecs. Ignore “X terminal” errors.
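The bullet checks above can be rolled into one script, sketched here for the Proxmox shell. The Coral USB IDs are the ones from this guide; adjust for your hardware.

```shell
# host_check: runs the four host verification steps and prints one line each
host_check() {
  echo "Debian: $(cat /etc/debian_version 2>/dev/null || echo unknown)"
  if [ -e /dev/dri/renderD128 ]; then
    echo "GPU: renderD128 present"
  else
    echo "GPU: renderD128 MISSING (kernel too old?)"
  fi
  if lsusb 2>/dev/null | grep -Eq '18d1:9302|1a6e:089a'; then
    echo "Coral: found"
  else
    echo "Coral: not found"
  fi
  if vainfo >/dev/null 2>&1; then
    echo "VA-API: OK"
  else
    echo "VA-API: FAILED (install the non-free drivers)"
  fi
}
host_check
```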

If vainfo fails, install the non-free drivers:

apt update

apt install -y intel-media-va-driver-non-free firmware-misc-nonfree vainfo intel-gpu-tools

If vainfo still fails, i.e. the hardware is still not detected, you may need to enable IOMMU and GuC in /etc/default/grub. The following sequence edits a line used at boot time:

nano /etc/default/grub 
# find the line GRUB_CMDLINE_LINUX_DEFAULT and make it read:

GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt i915.enable_guc=3"
update-grub
reboot
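After the reboot, you can confirm the flags actually took effect by checking the kernel’s boot command line; a sketch:

```shell
# check_flag: reports whether a given boot flag is present in /proc/cmdline
check_flag() {
  grep -qF -- "$1" /proc/cmdline && echo "$1: active" || echo "$1: missing"
}

for flag in intel_iommu=on iommu=pt i915.enable_guc=3; do
  check_flag "$flag"
done
```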

2: prepare the LXC Guest for ‘surgery’

Short version of a long story: if you don’t already have a Frigate LXC, you can create a ready-to-go LXC using the Proxmox Helper Scripts website. In my case I went for a script that made an LXC with Docker. Into this I installed Portainer CE, and then in Portainer I created a ‘stack’ (Docker Compose snippet below) to install Frigate.

My LXC was running an older Debian version, so we performed “surgery” to give it Trixie-level drivers. But first we must pass through the hardware to the LXC:

2.1 Passthrough the GPU and USB to the LXC

In the Proxmox host UI, note the ID (XXX) of the Frigate LXC, then edit the LXC configuration file from the Proxmox host shell and add these lines (or skip down to my final annotated configuration):

# type in the Proxmox host shell and add these lines (or scroll down to see what's relevant)
nano /etc/pve/lxc/XXX.conf
dev0: /dev/dri/card0,mode=0666
dev1: /dev/dri/renderD128,mode=0666
lxc.cgroup2.devices.allow: c 189:* rwm
lxc.mount.entry: /dev/bus/usb dev/bus/usb none bind,optional,create=dir
# the next three lines relax confinement while debugging; consider tightening them once everything works
lxc.apparmor.profile: unconfined
lxc.cap.drop: 
lxc.cgroup2.devices.allow: a

2.1b my final LXC configuration (annotated)

Here’s my config for the LXC with annotations (which tend to get moved by Proxmox – you can omit them). Type this in the Proxmox host shell:

nano /etc/pve/lxc/XXX.conf

# — Standard LXC Resources —

arch: amd64
cores: 4
hostname: frigate-docker
memory: 4096
swap: 512
ostype: debian
onboot: 1
rootfs: local-zfs:subvol-140-disk-0,size=24G
mp0: /mnt/sda-sand2TB,mp=/mnt/sda-sand2TB
features: nesting=1,fuse=1
net0: name=eth0,bridge=vmbr0,gw=192.168.1.1,hwaddr=myMAc,ip=myIP/24,type=veth

# — Modern GPU Passthrough (Proxmox 8+) passes the device AND sets permissions (0666) so the LXC user can “see” the GPU.

dev0: /dev/dri/card0,mode=0666
dev1: /dev/dri/renderD128,mode=0666

# — Google Coral USB Passthrough — map the Coral hardware ID. ‘optional=1’ prevents boot failure if unplugged.

usb0: host=18d1:9302,name=coral,optional=1

# — Overrides allow the LXC to access the USB character device group (189 is the ‘major’ number for USB).

lxc.cgroup2.devices.allow: c 189:* rwm

# This binds the entire USB bus. If the Coral ‘reboots’ (changes ID), this ensures it stays connected.

lxc.mount.entry: /dev/bus/usb dev/bus/usb none bind,optional,create=dir
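With the config in place and the LXC restarted, you can confirm inside the LXC console that both devices actually arrived. A sketch (note the Coral’s USB ID changes from 1a6e:089a to 18d1:9302 once its firmware has loaded, so either can be normal):

```shell
# check_lxc_devices: verifies the GPU render node and the Coral are visible
check_lxc_devices() {
  if [ -e /dev/dri/renderD128 ]; then
    echo "renderD128: OK"
  else
    echo "renderD128: MISSING (check the dev0/dev1 lines in the LXC config)"
  fi
  if lsusb 2>/dev/null | grep -Eq '18d1:9302|1a6e:089a'; then
    echo "Coral: OK"
  else
    echo "Coral: MISSING (check usb0 and the USB bind mount)"
  fi
}
check_lxc_devices
```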

2.2 updating LXC to Trixie Drivers 

This might not be necessary if the LXC is already on Debian Trixie. Inside the LXC console, we “trick” the system into getting the latest Intel media drivers:

# Switch repositories to Trixie
sed -i 's/bookworm/trixie/g' /etc/apt/sources.list

# Install specific Alder Lake-N drivers
apt update && apt install -y intel-media-va-driver-non-free firmware-misc-nonfree vainfo intel-gpu-tools
# test it
vainfo
# warning: while this repository switch works for drivers, "mixing" Debian versions can occasionally cause dependency issues if you run a full apt upgrade later. If the container works, I don’t upgrade what ain’t broke.
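If you’d rather not flip the whole sources list, a gentler (hypothetical) alternative is to add Trixie as a second, low-priority source and pull only the driver packages from it. The file names below are my own choices:

```
# /etc/apt/sources.list.d/trixie.list (added alongside the bookworm entries)
deb http://deb.debian.org/debian trixie main contrib non-free non-free-firmware

# /etc/apt/preferences.d/trixie-pin — keeps trixie packages out of a normal
# 'apt upgrade' (priority 100 is below the default 500)
Package: *
Pin: release n=trixie
Pin-Priority: 100
```

Then `apt update && apt install -t trixie intel-media-va-driver-non-free` upgrades just the driver stack while everything else stays on Bookworm.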

3: Docker & Frigate configuration

Go to the Frigate LXC container – Frigate runs in Docker. I prefer to manage this with Portainer CE. In Portainer I can create a ‘stack’ to install Frigate, and when e.g. my storage location changes, it’s straightforward to edit the ‘stack’ compose file. In the Docker Compose file, we must force Frigate to use the modern iHD driver, as the default i965 driver is incompatible with Alder Lake-N.

3.1 Docker Compose YAML

version: "3.8"
services:
  frigate:
    container_name: frigate
    privileged: true # this may not be necessary
    restart: unless-stopped
    image: ghcr.io/blakeblackshear/frigate:stable
    shm_size: "1G" # based on camera calculation
    cap_add:
      - CAP_PERFMON
    devices:
      - /dev/bus/usb:/dev/bus/usb # USB Coral
      - /dev/dri/renderD128:/dev/dri/renderD128 # hwaccel
      - /dev/dri:/dev/dri
    environment:
      - LIBVA_DRIVER_NAME=iHD
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /mnt/sda-sand2TB/frigate140/config/config.yml:/config/config.yml
      - /mnt/sda-sand2TB/frigate140:/media/frigate
      - /mnt/sda-sand2TB/frigate140/config:/config
      # these were not necessary
      # - /usr/lib/x86_64-linux-gnu/dri:/usr/lib/x86_64-linux-gnu/dri:ro
      # - /usr/lib/x86_64-linux-gnu/libigdgmm.so.12:/usr/lib/x86_64-linux-gnu/libigdgmm.so.12:ro
      - type: tmpfs
        target: /tmp/cache
        tmpfs:
          size: 1000000000
    network_mode: host
    ports: # ignored while network_mode is host; kept for reference
      - "5000:5000"
      - "8554:8554"
      - "8555:8555/tcp"
      - "8555:8555/udp"
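The shm_size comment above refers to a per-camera calculation. Here is a hedged sketch of the arithmetic, assuming a buffer of 20 detect-resolution frames plus a fixed overhead of 270480 bytes — both figures are illustrative, so check the Frigate docs for the exact formula your version uses:

```shell
# shm_for_camera WIDTH HEIGHT: rough shared-memory need in MB for one camera.
# A YUV420 frame is width * height * 1.5 bytes; 1.5 is written as *3/2
# because POSIX shell arithmetic is integer-only.
shm_for_camera() {
  w=$1; h=$2
  echo $(( (w * h * 3 / 2 * 20 + 270480) / 1048576 ))
}

shm_for_camera 1280 720  # prints 26 (MB, for a 720p detect stream)
```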

3.2 Check your Frigate logs:

Somewhere in your Frigate config, either globally or in a camera config, you need a QSV line, e.g.

  left164:
    ffmpeg:
      inputs:
        - path: rtsp://127.0.0.1:8554/left164_sub
          input_args: preset-rtsp-restream
          hwaccel_args: preset-intel-qsv-h264
          roles:
            - detect
            - record
Look in your Frigate logs for:

libva info: Found init function __vaDriverInit_1_XX
libva info: va_openDriver() returns 0
If you see returns 0, the driver chain is fully operational.
