Thursday 3 October 2024

Automating a Live Stream with Home Assistant Based on Sun Elevation

Until just recently, I had been semi-manually starting and stopping the live stream.
Before logging off for the evening, I would calculate the number of seconds between that time and dawn, then use it to build this command:

sleep <seconds> && sh script.sh
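
The arithmetic itself is trivial; scripted, it would look something like this (a rough sketch assuming GNU date and a made-up dawn time of 06:45):

dawn="tomorrow 06:45"                                # hypothetical dawn time
seconds=$(( $(date -d "$dawn" +%s) - $(date +%s) ))  # seconds from now until dawn
sleep "$seconds" && sh script.sh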

I would monitor the stream throughout the day (for mod actions like bot whack-a-mole and to count as an extra viewer :) )

Just after sunset when the light had dimmed, I would then attach to the session and terminate the script.

I found that sometimes I would forget to run the first command, run it from the wrong path, or miscalculate the duration.


With this in mind, I worked on an automation project using Home Assistant to control the live stream based on the sun's position. The goal was to automate the start and stop of the stream depending on the sun's elevation.

The Plan

I wanted the stream to run from just before sunrise to just after sunset. To achieve this, I decided to start the stream when the sun’s elevation reaches -8.0 degrees (during dawn) and stop it again when the sun’s elevation drops to -8.0 degrees after sunset (during dusk). This would mean the stream ran as the sun travelled across the sky, creating a seamless integration between the time of day and the stream.

Step 1: Controlling the Stream Remotely via SSH

The stream is controlled by two scripts on a remote host, which I can access over SSH. These scripts use tmux to ensure the stream continues running even if the connection to the remote host is lost.

  • Start Script: Checks for the existence of a tmux session. If the session doesn’t exist, it creates one. It then sends the command to the session to start the stream script.
  • Stop Script: Sends a command to the tmux session to stop the stream.

I set up Home Assistant to SSH into the host and trigger these scripts.
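
For context, the two scripts boil down to something like this (a sketch only; the session name and stream script path are assumptions, and the real stop script's mechanism may differ slightly):

#!/bin/bash
# start.sh (sketch) - create the tmux session if it doesn't exist, then launch the stream
SESSION="stream"                                            # assumed session name
if ! tmux has-session -t "$SESSION" 2>/dev/null; then
    tmux new-session -d -s "$SESSION"
fi
tmux send-keys -t "$SESSION" "sh /home/rob/stream.sh" C-m   # hypothetical script path

#!/bin/bash
# stop.sh (sketch) - interrupt whatever is running in the session to halt the stream
SESSION="stream"
tmux send-keys -t "$SESSION" C-c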

Step 2: Creating a New Switch and Adding SSH Commands in Home Assistant

To start with, I had to do the usual key-pair exchange on the Home Assistant command line, which creates the key files. For Home Assistant to use those keys, you need to copy them into the config folder.
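
In practice that amounted to something like the following (a rough sketch; the exact paths depend on how Home Assistant is installed, and user@host is a placeholder):

ssh-keygen -t rsa                              # creates ~/.ssh/id_rsa and id_rsa.pub
ssh-copy-id user@host                          # install the public key on the remote host
ssh user@host 'echo ok'                        # first connection populates ~/.ssh/known_hosts
cp ~/.ssh/id_rsa ~/.ssh/known_hosts /config/   # copy into the config folder for Home Assistant to use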

To trigger these scripts, I added a command_line switch with on and off commands to Home Assistant's configuration.yaml file:

yaml
command_line:
  - switch:
      name: Pi_Stream
      command_on: "ssh -q -i /config/id_rsa -o UserKnownHostsFile=/config/known_hosts user@host sh start.sh"
      command_off: "ssh -q -i /config/id_rsa -o UserKnownHostsFile=/config/known_hosts user@host sh stop.sh"
  • command_on connects to the remote host and runs the start script, which checks for the tmux session and starts the stream.
  • command_off connects to the remote host and runs the stop script, which sends the terminate command to the session, halting the stream.

Step 3: Automating Based on Sun Elevation

The next step was to automate the stream using the sun's elevation. Home Assistant's built-in sun integration tracks the position of the sun, which can be used to trigger automations.

I created an automation that stops the stream when the sun's elevation reaches -8.0 degrees after sunset:



yaml
- id: '1727953002868'
  alias: Stop Pi Stream
  description: When its dark
  triggers:
    - trigger: numeric_state
      entity_id:
        - sun.sun
      for:
        hours: 0
        minutes: 1
        seconds: 0
      attribute: elevation
      below: -8
  conditions: []
  actions:
    - action: switch.turn_off
      metadata: {}
      data: {}
      target:
        entity_id: switch.pi_stream
  mode: single

Once this automation is triggered, the stream shuts down as dusk deepens. As the stream is currently running at the time of writing, I need to wait to see if this is successful.
Once this is confirmed to be working, I plan to create another automation to start the stream when the sun's elevation rises to -8.0 degrees in the morning.
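
It should be a near mirror image of the stop automation, something like this (untested at the time of writing; the id is a placeholder):

yaml
- id: 'start_pi_stream'
  alias: Start Pi Stream
  description: When it gets light
  triggers:
    - trigger: numeric_state
      entity_id:
        - sun.sun
      for:
        hours: 0
        minutes: 1
        seconds: 0
      attribute: elevation
      above: -8
  conditions: []
  actions:
    - action: switch.turn_on
      metadata: {}
      data: {}
      target:
        entity_id: switch.pi_stream
  mode: single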

My next task is to reconfigure how the chat bot works.
Currently, the bot runs from a script that is fired hourly by a cron job.
The plan is to take this out of cron and use the same SSH approach to have Home Assistant fire off the script hourly, but only while the stream is running.
That way it's not running around the clock.
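
When I get to it, the rough shape will probably be a shell_command plus a time_pattern automation gated on the switch state, something like this sketch (chatbot.sh and the exact entity names are assumptions):

yaml
shell_command:
  chat_bot: "ssh -q -i /config/id_rsa -o UserKnownHostsFile=/config/known_hosts user@host sh chatbot.sh"

automation:
  - alias: Hourly Chat Bot
    triggers:
      - trigger: time_pattern
        minutes: 0
    conditions:
      - condition: state
        entity_id: switch.pi_stream
        state: 'on'
    actions:
      - action: shell_command.chat_bot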

Summary

By integrating Home Assistant with SSH and tmux, I was able to fully automate the control of a video stream based on the sun's position (subject to successful testing in about 4.5 hours, according to the Met Office). This approach can be adapted to any scenario where a remote script needs to be triggered from Home Assistant. It opens the door to many possibilities, whether for controlling cameras, live streams, or other devices based on environmental factors like light and time of day.

If you're looking to combine the power of Home Assistant's automation with remote scripts, SSH and tmux are excellent tools to ensure your commands run reliably.

Wednesday 2 October 2024

Automating Twitch Announcements Using Cron and Home Assistant

 

Recently, I integrated Twitch's API to automate sending announcements to my channel's chat. The goal was to set up a system where an hourly log, updated by Home Assistant's Sun integration, triggers a script that sends the last log entry as an announcement in my Twitch channel chat.

Setting Up the Twitch API

To begin, I created a Twitch application, which provided me with a Client ID and Client Secret. These are essential for making authenticated requests to the Twitch API. After creating the app, I needed to gain an Access Token with the appropriate scopes that would allow me to post chat messages.

Initially, I ran into an issue with the requested scope. After consulting Twitch's documentation, I learned that the correct scope for managing announcements had changed from channel:manage:announcements to moderator:manage:announcements. However, after further consideration, I decided to use the user:write:chat and user:bot scopes to simplify the integration.

Getting the Access Token

Using the Twitch OAuth flow, I generated an authorisation URL that included the necessary scopes. Once authorised, I exchanged the authorisation code for an access token using a simple curl command.

The access token allowed my script to communicate with Twitch’s API. Additionally, I retrieved my Broadcaster ID and Sender ID—important parameters for sending chat messages.
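
For anyone following along, the whole dance looks roughly like this (client ID, secret, redirect URI, code and channel name are all placeholders):

# 1. Open the authorisation URL in a browser and approve the scopes:
#    https://id.twitch.tv/oauth2/authorize?response_type=code&client_id=<client_id>&redirect_uri=<redirect_uri>&scope=user%3Awrite%3Achat+user%3Abot

# 2. Exchange the returned code for an access token:
curl -X POST "https://id.twitch.tv/oauth2/token" \
  -d "client_id=<client_id>" \
  -d "client_secret=<client_secret>" \
  -d "code=<auth_code>" \
  -d "grant_type=authorization_code" \
  -d "redirect_uri=<redirect_uri>"

# 3. Look up the numeric user ID (used as broadcaster_id / sender_id):
curl -H "Authorization: Bearer <access_token>" \
     -H "Client-Id: <client_id>" \
     "https://api.twitch.tv/helix/users?login=<channel_name>"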

Automating Announcements

With the access token and IDs in hand, I wrote a bash script that:

  • Reads the latest log entry from a file (SunElevation.txt), which Home Assistant updates hourly.
  • Sends that entry as an announcement to my Twitch chat using Twitch’s chat API.

I configured the script to run hourly via cron, ensuring my channel stays updated with automated messages based on the Sun elevation data collected by Home Assistant.
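
Stripped back, the script is little more than a tail of the log file and a curl to the Helix "Send Chat Message" endpoint; a rough sketch (paths, placeholder IDs and the crontab line are illustrative rather than my exact setup):

#!/bin/bash
# Read the last line Home Assistant wrote and post it to chat (sketch).
MESSAGE=$(tail -n 1 /home/rob/SunElevation.txt)

curl -X POST "https://api.twitch.tv/helix/chat/messages" \
  -H "Authorization: Bearer <access_token>" \
  -H "Client-Id: <client_id>" \
  -H "Content-Type: application/json" \
  -d "{\"broadcaster_id\":\"<broadcaster_id>\",\"sender_id\":\"<sender_id>\",\"message\":\"$MESSAGE\"}"

# crontab entry: run on the hour, every hour
# 0 * * * * /home/rob/announce.sh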

Overcoming Common Issues

Throughout the setup, I encountered a few key hurdles:

  1. Authorisation Scope Mismatch: Initially, the scope for sending announcements was incorrect, but switching to user:write:chat and user:bot solved the issue.
  2. OAuth Flow and Redirect URI: I manually managed the OAuth flow, copying the authorisation code from the browser and exchanging it for the access token via the command line. Though the process works, I’ll explore automating this step in the future.
  3. Cron Job Automation: The final piece was setting up cron to run the announcement script hourly. With the SunElevation.txt being updated regularly, this ensures the announcements are always in sync with the current state of the sun.

Conclusion

This setup provides a seamless way to automate Twitch announcements based on data from Home Assistant. The ability to send messages via the Twitch API opens up countless possibilities for engaging viewers in a dynamic, automated way. Whether it’s updating viewers on the weather, system statuses, or other key data points, this method can be easily adapted to suit various needs.

Shout out to my good friend, best mod and overall tech wizard Peaeyeennkay, for helping me navigate the quagmire that is development documentation.

Please go check him out over at https://mastodon.social/@PeaEyeEnnKay

Stay tuned as I continue to refine and enhance this setup!

Monday 23 September 2024

Streaming Setup: Integrating FFmpeg Overlays and Audio into a Picam feed

Lately, I’ve been setting up and refining a Raspberry Pi-based streaming setup, focusing on combining a video feed from a Raspberry Pi camera with overlay graphics and audio in real-time using ffmpeg. It’s been quite a journey, filled with trial and error as I worked through various technical challenges.

TL;DR Take me to:
The Twitch

I stumbled upon Restreamer (https://github.com/datarhei/restreamer) which runs in a container.
I deployed this to the Raspberry Pi and set about connecting everything up.

Initial Camera and Overlay Setup

I started by streaming a camera feed using rpicam-vid on a Raspberry Pi. The initial command streamed video at 1080p and 30 fps to a TCP connection:

rpicam-vid -t 0 --inline --listen -o tcp://0.0.0.0:8554 --level 4.2 --framerate 30 --width 1920 --height 1080 --denoise cdn_off -b 8000000

I was then able to add this feed to Restreamer, add a secondary audio stream, connect it to a Twitch account and stream live.
Unfortunately, the software has no mechanism for adding overlays to the resulting stream.

With this in mind, I created another ffmpeg command that takes the TCP stream from the Pi, overlays an image on it and adds the contents of a text file (current_track.txt, more on that below).

ffmpeg -loglevel debug -i tcp://192.168.1.54:8554 -i StreamOverlay.png \
  -filter_complex "[0:v][1:v]overlay=0:0,drawtext=textfile='current_track.txt':x=(w-text_w)/2:y=h-50:fontcolor=green:fontsize=24:box=1:boxcolor=black@0.5:boxborderw=10" \
  -an -c:v libx264 -f mpegts tcp://<ip_address>:8556

It seems the Raspberry Pi 4 doesn't have sufficient resources to encode the camera feed with the overlay. I tried reducing the incoming camera resolution to 1280x720, but this was still too much for the Restreamer setup to handle on the modest hardware. At this point I moved the heavy lifting over to a virtual machine on my home server, and this seemed to solve the problem.

ffmpeg -loglevel debug -i tcp://0.0.0.0. -i StreamOverlay.png \
  -filter_complex "[0:v][1:v]overlay=0:0,drawtext=textfile='current_track.txt':x=(w-text_w)/2:y=h-50:fontcolor=green:fontsize=24:box=1:boxcolor=black@0.5:boxborderw=10" \
  -an -c:v h264 -b:v 8M -g 30 -preset veryfast -tune zerolatency -bufsize 16M -max_delay 500000 \
  -x264-params keyint=30:min-keyint=15:scenecut=0 -f mpegts tcp://0.0.0.0:8554?listen

Initially, I encountered stream quality and decoding errors.
After tweaking buffer sizes, bitrate, and keyframe intervals, things began to stabilise.

Integrating Audio

Next, I focused on integrating audio into the video stream. Initially, I used a separate ffmpeg process to stream MP3 files over TCP, but I faced an issue where audio stopped after the first track ended. The ffmpeg process didn’t crash but would stall on subsequent tracks. Here’s the basic script I used:

#!/bin/bash
audio_folder="<folder where music resides>"
output_file="current_track.txt"
while true; do
  for file in "$audio_folder"/*.mp3; do
    echo "Now playing: $(basename "$file")" > "$output_file"
    cp "$output_file" /home/rob/"$output_file"
    ffmpeg -re -i "$file" -acodec copy -f mulaw tcp://0.0.0.0:8555?listen
  done
done

After switching to a local setup, with both the video and audio on the same server, I modified the overlay command to iterate through the MP3s in a folder directly.

Putting it all together


I moved the individual commands into their own scripts and added some logic to restart the "service" if it dropped for any reason.

It also seems that the Restreamer software doesn't like being on the Pi, so with this in mind I bypassed that extra layer entirely.

That worked, but I still had issues with audio. Here is the combined script:

#!/bin/bash

# Define the folder containing the audio files
audio_folder="/home/rob/Music"

# Define the text file where the current track info will be written
output_file="current_track.txt"

# Define the playlist file
playlist_file="playlist.txt"

while true; do
    # Generate the playlist file
    rm -f "$playlist_file"
    for file in "$audio_folder"/*.mp3; do
        echo "file '$file'" >> "$playlist_file"
    done

    # Get the first track name to display as "Now playing"
    first_track=$(basename "$(head -n 1 "$playlist_file" | sed "s/file '//g" | sed "s/'//g")")
    echo "Now playing: $first_track" > "$output_file"

    # Run ffmpeg to combine the video, overlay, and audio from the playlist
    echo "Starting ffmpeg overlay with playlist..."
    ffmpeg -loglevel level+debug -i tcp://192.168.1.54:8554 \
            -i StreamOverlay.png \
            -f concat -safe 0 -i "$playlist_file" \
            -filter_complex "[0:v][1:v]overlay=0:0,drawtext=textfile='$output_file':x=(w-text_w)/2:y=h-50:fontcolor=green:fontsize=24:box=1:boxcolor=black@0.5:boxborderw=10" \
            -c:a aac -ac 2 -b:a 128k \
            -c:v h264 -b:v 6000k -g 60 -preset veryfast -tune zerolatency \
            -bufsize 12M -max_delay 500000 -x264-params keyint=60:scenecut=0 \
            -f flv rtmp://live.twitch.tv/app/live_<stream_key>

    # Check if ffmpeg encountered an error and restart
    if [ $? -ne 0 ]; then
        echo "ffmpeg stopped. Restarting in 5 seconds..."
        sleep 5
    fi
done

This seemed to work fine for a while, but then the audio would stop. I have yet to find the time to investigate.

Tidying up


I had the various scripts running in separate tmux sessions so I could keep an eye on them. To make this easier, I made a script that creates the sessions and runs the respective script in each:

#!/bin/bash

# Define script paths
camera_script="/path/to/your/camera_script.sh"
overlay_script="/path/to/your/overlay_script.sh"

# Define session names
overlay_script_session="Overlay"
camera_session="Camera"

# Start tmux session for Camera
tmux new-session -d -s "$camera_session" "bash $camera_script"
echo "Started tmux session: $camera_session"

# Start tmux session for Overlay
tmux new-session -d -s "$overlay_script_session" "bash $overlay_script"
echo "Started tmux session: $overlay_script_session"

This works great if I have to restart everything.
I'm also looking into a way of automating the start and stop of streams based on sunrise and sunset at my location, but for the time being I am just calculating the time in seconds between now and sunrise and adding that to a one-line command:

sleep <seconds> && sh script.sh

Timelapse Creation

During all of this, I also worked on creating a timelapse from the resulting 13-hour video. Using ffmpeg, I generated a 1-minute timelapse that was successfully uploaded to YouTube. The command was straightforward and effective:

ffmpeg -i input_video.mp4 -filter:v "setpts=PTS/802" -an -r 30 output_timelapse.mp4

This command sped up the video by a factor of 802 by adjusting the presentation timestamps (13 hours ÷ 802 ≈ 58 seconds), producing a smooth timelapse.
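
The divisor is just the source duration divided by the target duration; if I wanted to derive it automatically, something like this would do (assuming ffprobe and bc are available, with a 60-second target):

duration=$(ffprobe -v error -show_entries format=duration -of csv=p=0 input_video.mp4)
factor=$(printf '%.0f' "$(echo "$duration / 60" | bc -l)")
ffmpeg -i input_video.mp4 -filter:v "setpts=PTS/$factor" -an -r 30 output_timelapse.mp4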

Final Thoughts

This project has been a learning experience in stream handling, ffmpeg configurations, and overcoming hardware limitations. I’ve moved most of the intensive processing off the Raspberry Pi to ensure smoother streaming and a better viewer experience.
Man, formatting ffmpeg commands correctly was a challenge in itself, especially when taking multiple sources and overlaying them the way I wanted.
While there are always more optimisations to be made, especially regarding audio stability, the progress has been rewarding. 

You can find:
The Twitch