Streaming Setup

Live streaming1 can be done independently of third parties by using a streaming media server2 such as Icecast and a source client that encodes and streams to that server.

I’m using Icecast and oggfwd in combination with ffmpeg as the source client to realize live streaming concerts. Although there are many official source clients available, oggfwd seems to be the only solution for JACK and TLS-encrypted connections. The mixing and monitoring are done in Ardour. The signal flow can be represented as follows:

graph LR;
    ardour-->ffmpeg;
    ffmpeg-->oggfwd;
    oggfwd-->icecast;
    icecast-->listener;

If you follow this guide to set up your own live streaming, I assume a configuration according to my Icecast setup and a working audio setup using the JACK Audio Connection Kit (JACK) as mentioned above. Additionally, general knowledge of how to install software and use a terminal in your GNU/Linux distribution is necessary.

Installation

Get the ffmpeg and oggfwd packages by using your distribution’s package manager or compile them from sources. You also need mpv installed in order to play waiting music and wget3 for triggering tasks via Icecast’s Admin Interface. In Arch Linux, ffmpeg and wget are provided in the official repositories and oggfwd in the AUR.
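
On Arch Linux, for example, the installation could look like the following; the AUR helper yay is an assumption here, any other way of building AUR packages works as well:

sudo pacman -S ffmpeg wget mpv   # official repositories
yay -S oggfwd                    # AUR package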

Start live streaming

The following sections document how to manage the live streaming in detail by describing the steps and providing numbered Bash4 scripts. Create the files using the filenames in the code block headers, copy and paste the code, and make the files executable.
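
Making all scripts executable can be done in one go, for example (assuming the files are created in the current directory with the filenames shown below):

chmod +x ./[1-8]-*.sh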

Setting up the environment

To be able to simply copy and paste the commands used in this guide, I set the following environment variables5 according to my needs. Most of the values depend on the settings applied in the Icecast basic setup. The following script is an example and needs to be adapted:

1-setting-up-env.sh
# ffmpeg options
export CONCERT_TITLE="My streaming concert"
export ARTIST_NAME="FLOSS artist"
# oggfwd and wget options
export HOSTNAME="icecast.example.org"
export PORT="8443"
export SOURCE_PASSWORD="secret-S0URC3-passwd"
export STREAM_MOUNTPOINT="/stream.ogg"
export FALLBACK_MOUNTPOINT="/greeting.ogg"
export ADMIN_USERNAME="admin"
export ADMIN_PASSWORD="@dmin-P4SSW0RD"
# ffmpeg, mpv, and jack_connect options
export AUDIO_FILE="/path/to/waiting-music.ogg"
export AUDIO_BACKEND="jack"
export JACK_CLIENT1="ffmpeg"
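
Since this script only exports the variables, it has to be sourced in every terminal that runs one of the later commands; executing it normally would set the variables in a subshell only:

source ./1-setting-up-env.sh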

I create or open an Ardour session to route, control, and mix physical inputs and pre-recorded tracks, if any. I set up the session: level gains, pan tracks, and apply effects, keeping in mind to save some resources for encoding. If a recording is desired, I globally activate recording in the transport bar and arm the desired tracks using the audio track controls.

Encoding and forwarding

The output of Ardour must be encoded to Ogg Vorbis (the Vorbis audio codec in an Ogg container) and provided at stdout to be forwarded by oggfwd. This can be achieved by using ffmpeg’s JACK input device and audio encoding parameters and piping6 the output to oggfwd using the | operator:

2-encoding-forwarding.sh
ffmpeg -f $AUDIO_BACKEND -i $JACK_CLIENT1 \
       -vn \
       -acodec libvorbis \
       -b:a 192k -minrate 192k -maxrate 192k \
       -metadata title="$CONCERT_TITLE" -metadata artist="$ARTIST_NAME" \
       -f ogg -y /dev/stdout \
| oggfwd $HOSTNAME $PORT $SOURCE_PASSWORD $STREAM_MOUNTPOINT

The ffmpeg parameters are:

  • -f $AUDIO_BACKEND -i $JACK_CLIENT1: Use $AUDIO_BACKEND as the input format and create a JACK input client named $JACK_CLIENT1 (here: ffmpeg).
  • -vn: Specify that no video data is included (i.e. audio only stream selection).
  • -acodec libvorbis: Use libvorbis audio codec in the audio options.
  • -b:a 192k -minrate 192k -maxrate 192k: Ensure a constant bitrate of 192 kbps for on-the-fly encoding by setting these rate control options.
  • -metadata title="$CONCERT_TITLE" -metadata artist="$ARTIST_NAME": Add artist and title metadata for identification in the main options.
  • -f ogg -y /dev/stdout: Force Ogg Vorbis output format and write to stdout.
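
Once this pipeline is running and the JACK connections from the next step are made, the stream can be checked with any Ogg-capable player, for example with mpv and the variables set above:

mpv "https://$HOSTNAME:$PORT$STREAM_MOUNTPOINT"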

Connecting JACK clients

Use another terminal to connect Ardour’s master outputs to ffmpeg:input_1 and ffmpeg:input_2 within JACK:

3-connecting-jack-clients.sh
jack_connect "ardour:Master/audio_out 1" $JACK_CLIENT1:input_1
jack_connect "ardour:Master/audio_out 2" $JACK_CLIENT1:input_2

Saving time

When setting up all of this by yourself, it might be necessary to play waiting music immediately after the source streaming has started in order to save time for further preparations. I’m using mpv with the jack audio output driver, auto-connecting to the ffmpeg JACK ports, and looping the file $AUDIO_FILE via a playback control option in another terminal:

4-saving-time.sh
mpv --ao=$AUDIO_BACKEND --jack-port=$JACK_CLIENT1 --loop-file=inf $AUDIO_FILE

Press the m key when starting the concert to mute mpv’s output, and press the Play button in Ardour’s transport bar to start recording, if desired.

Controlling listeners

Icecast’s Admin Interface offers some functions to control listeners. I’m using wget to trigger those tasks at the terminal. The general syntax is:

wget --quiet \
     --output-document=/dev/null \
     --http-user=$ADMIN_USERNAME \
     --http-password=$ADMIN_PASSWORD \
     https://$HOSTNAME:$PORT/admin/$WGET_URI

The wget parameters are:

  • --quiet: Turn off terminal output in the logging options.
  • --output-document=/dev/null: Write the output document to /dev/null to discard it in the download options.
  • --http-user=$ADMIN_USERNAME: Use $ADMIN_USERNAME in the HTTP options for the Icecast web administration login.
  • --http-password=$ADMIN_PASSWORD: Use $ADMIN_PASSWORD in the HTTP options for the Icecast web administration login.

The value of $WGET_URI depends on the specific task and is used for the following actions.
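
Since only the URI changes between the following actions, the call can optionally be wrapped in a small helper function. This is just a convenience sketch using the variables set above, not part of the numbered scripts:

icecast_admin() {
    # $1 is the admin URI, e.g. "killsource.xsl?mount=$FALLBACK_MOUNTPOINT"
    wget --quiet \
         --output-document=/dev/null \
         --http-user="$ADMIN_USERNAME" \
         --http-password="$ADMIN_PASSWORD" \
         "https://$HOSTNAME:$PORT/admin/$1"
}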

Moving listeners

The fallback mountpoint has been active since Icecast started, and listeners might have connected to it before the actual source stream started. To move those client listeners from the fallback mountpoint to the stream mountpoint, execute this script:

5-moving-listeners.sh
export WGET_URI="moveclients.xsl mount=$FALLBACK_MOUNTPOINT&destination=$STREAM_MOUNTPOINT"
wget --quiet \
     --output-document=/dev/null \
     --http-user=$ADMIN_USERNAME \
     --http-password=$ADMIN_PASSWORD \
     https://$HOSTNAME:$PORT/admin/$WGET_URI
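
To verify that the listeners actually moved, Icecast’s listclients.xsl admin page can be queried and printed to the terminal, for example:

wget --quiet \
     --output-document=- \
     --http-user=$ADMIN_USERNAME \
     --http-password=$ADMIN_PASSWORD \
     "https://$HOSTNAME:$PORT/admin/listclients.xsl?mount=$STREAM_MOUNTPOINT"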

Killing the fallback

Afterwards, it’s a best practice to kill the fallback streaming source in order to get rid of misbehaving listener clients that did not follow the move:

6-killing-fallback.sh
export WGET_URI="killsource.xsl?mount=$FALLBACK_MOUNTPOINT"
wget --quiet \
     --output-document=/dev/null \
     --http-user=$ADMIN_USERNAME \
     --http-password=$ADMIN_PASSWORD \
     https://$HOSTNAME:$PORT/admin/$WGET_URI

Killing the source streaming

When the concert is over, I unmute mpv’s output by pressing the m key again to play back the waiting music and indicate the end of the concert. The client listeners are moved back to the fallback mountpoint and the source streaming is killed. The script below combines the previous substeps to achieve this:

7-killing-source.sh
export WGET_URI="moveclients.xsl?mount=$STREAM_MOINTPOINT&destination=$FALLBACK_MOUNTPOINT"
wget --quiet \
     --output-document=/dev/null \
     --http-user=$ADMIN_USERNAME \
     --http-password=$ADMIN_PASSWORD \
     https://$HOSTNAME:$PORT/admin/$WGET_URI
export WGET_URI="killsource.xsl?mount=$STREAM_MOUNTPOINT"
wget --quiet \
     --output-document=/dev/null \
     --http-user=$ADMIN_USERNAME \
     --http-password=$ADMIN_PASSWORD \
     https://$HOSTNAME:$PORT/admin/$WGET_URI

Terminating local tools

Lastly, the ffmpeg, oggfwd, and mpv processes are killed:

8-terminate-tools.sh
killall ffmpeg oggfwd mpv

In Ardour, I can stop the recording and save the session for later production.


  1. Live streaming on Wikipedia ↩︎

  2. Media server on Wikipedia ↩︎

  3. GNU Wget website ↩︎

  4. GNU Bash website ↩︎

  5. Environment variable on Wikipedia ↩︎

  6. Pipeline (Unix) on Wikipedia ↩︎
