Lately, I’ve been setting up and refining a Raspberry Pi-based streaming setup,
focusing on combining a video feed from a Raspberry Pi camera with overlay graphics and audio in real time using ffmpeg. It’s been quite a journey,
filled with trial and error as I worked through various technical challenges.
TL;DR: take me to:
The Twitch
The timelapse
I stumbled upon Restreamer (https://github.com/datarhei/restreamer), which runs in a container.
I deployed this to the Raspberry Pi and set about connecting everything up.
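Deploying it is a standard container run; something along these lines should work (a sketch based on the project's README, so treat the ports, volume paths, and image tag as version-dependent):

docker run -d --restart=always --name restreamer \
  -p 8080:8080 \
  -v /opt/restreamer/config:/core/config \
  -v /opt/restreamer/data:/core/data \
  datarhei/restreamer:latest

The web UI then comes up on port 8080, which is where the streams get wired up.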
Initial Camera and Overlay Setup
I started by streaming a camera feed using
rpicam-vid
on a Raspberry Pi. The initial command streamed video at 1080p and 30 fps over a TCP connection:
rpicam-vid -t 0 --inline --listen -o tcp://0.0.0.0:8554 --level 4.2 --framerate 30 --width 1920 --height 1080 --denoise cdn_off -b 8000000
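Before wiring anything else up, it's worth sanity-checking the feed by playing it directly on another machine (192.168.1.54 being the Pi's address here; ffplay usually auto-detects the raw H.264 stream, and -f h264 forces the issue if probing fails):

ffplay tcp://192.168.1.54:8554
# or, if format probing fails on the raw stream:
ffplay -f h264 tcp://192.168.1.54:8554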
I was then able to add this camera stream to the Restreamer software, add a secondary audio stream, connect it to a Twitch account, and stream live.
Unfortunately, the software has no mechanism for adding overlays to the resulting stream.
With this in mind, I created another ffmpeg command that takes the TCP stream from the Pi, overlays an image, and draws the contents of a text file (current_track.txt, which holds the track information) on top.
ffmpeg -loglevel debug -i tcp://192.168.1.54:8554 -i StreamOverlay.png \
  -filter_complex "[0:v][1:v]overlay=0:0,drawtext=textfile='current_track.txt':x=(w-text_w)/2:y=h-50:fontcolor=green:fontsize=24:box=1:boxcolor=black@0.5:boxborderw=10" \
  -an -c:v libx264 -f mpegts tcp://<ip_address>:8556
It seems the Raspberry Pi 4 doesn't have sufficient resources to encode the camera feed with the overlay. I tried reducing the incoming camera resolution to 1280×720, but even that was too much for the Restreamer software on such modest hardware. At this point I moved the heavy lifting over to a virtual machine on my home server, which solved the problem.
ffmpeg -loglevel debug -i tcp://192.168.1.54:8554 -i StreamOverlay.png \
  -filter_complex "[0:v][1:v]overlay=0:0,drawtext=textfile='current_track.txt':x=(w-text_w)/2:y=h-50:fontcolor=green:fontsize=24:box=1:boxcolor=black@0.5:boxborderw=10" \
  -an -c:v h264 -b:v 8M -g 30 -preset veryfast -tune zerolatency \
  -bufsize 16M -max_delay 500000 -x264-params keyint=30:min-keyint=15:scenecut=0 \
  -f mpegts "tcp://0.0.0.0:8554?listen"
Initially, I encountered stream quality and decoding errors.
After tweaking buffer sizes, bitrate, and keyframe intervals, things began to stabilise.
Integrating Audio
Next, I focused on integrating audio into the video stream. Initially, I used a separate ffmpeg process to stream MP3 files over TCP, but I faced an issue
where audio stopped after the first track ended. The ffmpeg process didn’t crash but would stall on subsequent tracks. Here’s the basic script I used:
#!/bin/bash
# Play every MP3 in the folder on a loop, writing the current track name
# to a text file for the drawtext overlay to pick up
audio_folder="<folder where music resides>"
output_file="current_track.txt"

while true; do
    for file in "$audio_folder"/*.mp3; do
        echo "Now playing: $(basename "$file")" > "$output_file"
        cp "$output_file" "/home/rob/$output_file"
        # ?listen makes ffmpeg wait for the consumer to connect
        ffmpeg -re -i "$file" -acodec copy -f mulaw "tcp://0.0.0.0:8555?listen"
    done
done
After switching to a local setup, with both the video and audio on the same server, I modified the overlay command to iterate through
the MP3s in a folder directly.
Putting it all together
It seems the Restreamer software doesn't like being on the Pi, so I bypassed that extra software entirely and sent the output straight to Twitch. That worked, but I still had issues with audio. I moved the individual commands into their respective scripts and added some logic to restart the "service" if it dropped for any reason:
#!/bin/bash

# Define the folder containing the audio files
audio_folder="/home/rob/Music"

# Define the text file where the current track info will be written
output_file="current_track.txt"

# Define the playlist file
playlist_file="playlist.txt"

while true; do
    # Generate the playlist file
    rm -f "$playlist_file"
    for file in "$audio_folder"/*.mp3; do
        echo "file '$file'" >> "$playlist_file"
    done

    # Get the first track name to display as "Now playing"
    first_track=$(basename "$(head -n 1 "$playlist_file" | sed "s/file '//g" | sed "s/'//g")")
    echo "Now playing: $first_track" > "$output_file"

    # Run ffmpeg to combine the video, overlay, and audio from the playlist
    echo "Starting ffmpeg overlay with playlist..."
    ffmpeg -loglevel level+debug -i tcp://192.168.1.54:8554 \
        -i StreamOverlay.png \
        -f concat -safe 0 -i "$playlist_file" \
        -filter_complex "[0:v][1:v]overlay=0:0,drawtext=textfile='$output_file':x=(w-text_w)/2:y=h-50:fontcolor=green:fontsize=24:box=1:boxcolor=black@0.5:boxborderw=10" \
        -c:a aac -ac 2 -b:a 128k \
        -c:v h264 -b:v 6000k -g 60 -preset veryfast -tune zerolatency \
        -bufsize 12M -max_delay 500000 -x264-params keyint=60:scenecut=0 \
        -f flv rtmp://live.twitch.tv/app/live_<stream_key>

    # If ffmpeg exited with an error, pause before the loop restarts it
    if [ $? -ne 0 ]; then
        echo "ffmpeg stopped. Restarting in 5 seconds..."
        sleep 5
    fi
done
This seemed to work fine for a while, but then the audio would stop. I have yet to find the time to investigate (though see the edit at the end of this post).
Tidying up
I had the various scripts running in separate tmux sessions so I could keep an eye on them. To make this easier, I made a script that creates the sessions and runs the respective scripts:
#!/bin/bash
# Define script paths
camera_script="/path/to/your/camera_script.sh"
overlay_script="/path/to/your/overlay_script.sh"
# Define session names
overlay_script_session="Overlay"
camera_session="Camera"
# Start tmux session for Camera
tmux new-session -d -s "$camera_session" "bash $camera_script"
echo "Started tmux session: $camera_session"
# Start tmux session for Overlay
tmux new-session -d -s "$overlay_script_session" "bash $overlay_script"
echo "Started tmux session: $overlay_script_session"
This works great if I have to restart everything.
I'm also looking into a way of automating the start and stop of streams based on the sunrise and sunset at my location, but for the time being I am just calculating the time in seconds between now and sunrise and adding that to the command in one line:
sleep <seconds> && sh script.sh
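That seconds value can be worked out in the same spirit. A rough sketch, assuming GNU date and with the sunrise time hardcoded as a placeholder (a proper version would look it up):

#!/bin/bash
# Sleep until the next occurrence of $sunrise, then start the stream
sunrise="07:42"                      # placeholder, not a real lookup
now=$(date +%s)
target=$(date -d "$sunrise" +%s)     # today at $sunrise (GNU date)
if [ "$target" -le "$now" ]; then
    target=$(date -d "tomorrow $sunrise" +%s)
fi
sleep $(( target - now )) && sh script.sh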
Timelapse Creation
During all of this, I also worked on creating a timelapse from the resulting 13-hour video. Using ffmpeg, I generated a 1-minute timelapse that was successfully uploaded to YouTube. The command was straightforward and effective:
ffmpeg -i input_video.mp4 -filter:v "setpts=PTS/802" -an -r 30 output_timelapse.mp4
This command sped up the video by a factor of 802 by adjusting the presentation timestamps, producing a smooth timelapse: 13 hours is roughly 46,800 seconds, and 46,800 / 802 works out at just under a minute of output.
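To derive that divisor for an arbitrary recording, ffprobe can supply the duration. A quick sketch, where 60 is the target output length in seconds:

# Print the setpts divisor needed to squeeze the input into ~60 seconds
duration=$(ffprobe -v error -show_entries format=duration -of csv=p=0 input_video.mp4)
echo "setpts=PTS/$(echo "$duration / 60" | bc)"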
Final Thoughts
This project has been a learning experience in stream handling, ffmpeg configurations, and overcoming hardware limitations. I’ve moved most of the intensive
processing off the Raspberry Pi to ensure smoother streaming and a better viewer experience.
Man, formatting ffmpeg commands correctly was the biggest challenge, especially for taking multiple sources and overlaying them in the way I wanted.
While there are always more optimisations to be made, especially regarding audio stability, the progress has been rewarding.
EDIT: I have just confirmed that the audio is in fact working; it's doing exactly what it's supposed to. The reason it stops playing anything after 1 hour 20 minutes is because that's the length of the playlist!
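A likely fix, which I haven't tested yet, is to loop the playlist input itself: ffmpeg's -stream_loop option can repeat an input indefinitely, so the playlist line in the overlay script would become:

-stream_loop -1 -f concat -safe 0 -i "$playlist_file" \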