[Feature] Add H265 support with SDP, RTP payloader-depayloader (#1965)
* Add RtpH265Payloader.c and RtpH265Payloader.h

* Add support for H265 in PeerConnection/SessionDescription

* Add support for H265 in PeerConnection/PeerConnection

* Add support for H265 in PeerConnection/Rtp

* Add support for H265 in samples/Common.c

* Add support for H265 in samples/kvsWebRTCClientMaster.c

* rtp, sdp fix, flag removed, clang fixed, windows build fixed, new test added

* test fix

* cleanup

* cleanup

* remove #if 0

* clang

* presentation ts fix

* clang-format fix

* PKG_CONFIG_PATH in kvscommon

* missing bracket

* fix all builds

* ci

* cleanup

* fix windows build, rename h265 defs

* remove duplicate line from h264 and h265

* sample changes

* address comments

* clang-format

* gst sample

* cleanup args

* clang-format

* cleanup

* add sdp tests

* address comments

* address comments

* set default payload type only once

* address comments

* fix height and width

* sdp change

---------

Co-authored-by: Hongli Wang <hongliwo@amazon.com>
niyatim23 and hongliwo authored May 1, 2024
1 parent 0fb080c commit a28e1e4
Showing 21 changed files with 856 additions and 79 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/ci.yml
@@ -645,7 +645,7 @@ jobs:
shell: powershell
run: |
$env:Path += ';C:\webrtc\open-source\bin;C:\tools\pthreads-w32-2-9-1-release\Pre-built.2\dll\x64;C:\webrtc\build'
& "C:\webrtc\build\tst\webrtc_client_test.exe" --gtest_filter="-SignalingApiFunctionalityTest.receivingIceConfigOffer_SlowClockSkew:SignalingApiFunctionalityTest.iceServerConfigRefreshConnectedAuthExpiration:SignalingApiFunctionalityTest.receivingIceConfigOffer_FastClockSkew:SignalingApiFunctionalityTest.receivingIceConfigOffer_FastClockSkew_VerifyOffsetRemovedWhenClockFixed:DataChannelFunctionalityTest.*:DtlsApiTest.*:IceApiTest.*:IceFunctionalityTest.*:PeerConnectionFunctionalityTest.*:TurnConnectionFunctionalityTest.*:RtpFunctionalityTest.marshallUnmarshallH264Data:RtpFunctionalityTest.packingUnpackingVerifySameH264Frame:RtcpFunctionalityTest.onRtcpPacketCompound:RtcpFunctionalityTest.twcc3"
& "C:\webrtc\build\tst\webrtc_client_test.exe" --gtest_filter="-SignalingApiFunctionalityTest.receivingIceConfigOffer_SlowClockSkew:SignalingApiFunctionalityTest.iceServerConfigRefreshConnectedAuthExpiration:SignalingApiFunctionalityTest.receivingIceConfigOffer_FastClockSkew:SignalingApiFunctionalityTest.receivingIceConfigOffer_FastClockSkew_VerifyOffsetRemovedWhenClockFixed:DataChannelFunctionalityTest.*:DtlsApiTest.*:IceApiTest.*:IceFunctionalityTest.*:PeerConnectionFunctionalityTest.*:TurnConnectionFunctionalityTest.*:RtpFunctionalityTest.marshallUnmarshallH264Data:RtpFunctionalityTest.packingUnpackingVerifySameH264Frame:RtpFunctionalityTest.packingUnpackingVerifySameH265Frame:RtcpFunctionalityTest.onRtcpPacketCompound:RtcpFunctionalityTest.twcc3"
# windows-msvc-mbedtls:
# runs-on: windows-2022
# env:
29 changes: 25 additions & 4 deletions README.md
@@ -273,15 +273,18 @@ After executing `make` you will have sample applications in your `build/samples`
#### Sample: kvsWebrtcClientMaster
This application sends sample H264/Opus frames (path: `/samples/h264SampleFrames` and `/samples/opusSampleFrames`) via WebRTC. It also accepts incoming audio, if enabled in the browser. When checked in the browser, it prints the metadata of the received audio packets in your terminal. To run:
```shell
./samples/kvsWebrtcClientMaster <channelName>
./samples/kvsWebrtcClientMaster <channelName> <storage-option> <audio-codec> <video-codec>
```

To use the **Storage for WebRTC** feature, run the same command as above but with an additional command line arg to enable the feature.

```shell
./samples/kvsWebrtcClientMaster <channelName> 1
./samples/kvsWebrtcClientMaster <channelName> 1 <audio-codec> <video-codec>
```

Allowed audio-codec: opus (default codec if nothing is specified), aac
Allowed video-codec: h264 (default codec if nothing is specified), h265

#### Sample: kvsWebrtcClientMasterGstSample
This application can send media from a GStreamer pipeline using test H264/Opus frames, device `autovideosrc` and `autoaudiosrc` input, or a received RTSP stream. It also will playback incoming audio via an `autoaudiosink`. To run:
```shell
@@ -292,23 +295,41 @@ Pass the desired media and source type when running the sample. The mediaType ca
./samples/kvsWebrtcClientMasterGstSample <channelName> <mediaType> rtspsrc rtsp://<rtspUri>
```

Using the testsrc with an audio and a video codec:
```shell
./samples/kvsWebrtcClientMasterGstSample <channelName> <mediaType> <sourceType> <audio-codec> <video-codec>
```

Example:
```shell
./samples/kvsWebrtcClientMasterGstSample <channelName> audio-video testsrc opus h264
```

Allowed audio-codec: opus (default codec if nothing is specified), aac
Allowed video-codec: h264 (default codec if nothing is specified), h265

#### Sample: kvsWebrtcClientViewer
This application accepts sample H264/Opus frames by default. You can use other supported codecs by changing the value for `videoTrack.codec` and `audioTrack.codec` in _Common.c_. By default, this sample only logs the size of the audio and video buffer it receives. To write these frames to a file using GStreamer, use the _kvsWebrtcClientViewerGstSample_ instead.

To run:
```shell
./samples/kvsWebrtcClientViewer <channelName>
./samples/kvsWebrtcClientViewer <channelName> <audio-codec> <video-codec>
```

Allowed audio-codec: opus (default codec if nothing is specified), aac
Allowed video-codec: h264 (default codec if nothing is specified), h265

#### Sample: kvsWebrtcClientViewerGstSample
This application is similar to the kvsWebrtcClientViewer. However, instead of just logging the media it receives, it generates a file using filesink. Make sure that your device has enough space to write the media to a file. You can also customize the receiving logic by modifying the functions in _GstAudioVideoReceiver.c_.

To run:
```shell
./samples/kvsWebrtcClientViewerGstSample <channelName> <mediaType>
./samples/kvsWebrtcClientViewerGstSample <channelName> <mediaType> <audio-codec> <video-codec>
```

Allowed audio-codec: opus (default codec if nothing is specified), aac
Allowed video-codec: h264 (default codec if nothing is specified), h265

##### Known issues:
Our GStreamer samples leverage [MatroskaMux](https://gstreamer.freedesktop.org/documentation/matroska/matroskamux.html?gi-language=c) to receive media from its peer and save it to a file. However, MatroskaMux is designed for scenarios where the media's format remains constant throughout streaming. When the media's format changes mid-streaming (referred to as "caps changes"), MatroskaMux encounters limitations, its behavior cannot be predicted and it may be unable to handle these changes, resulting in an error message like:

9 changes: 4 additions & 5 deletions samples/Common.c
@@ -571,13 +571,12 @@ STATUS createSampleStreamingSession(PSampleConfiguration pSampleConfiguration, P
}
#endif

// Declare that we support H264,Profile=42E01F,level-asymmetry-allowed=1,packetization-mode=1 and Opus
CHK_STATUS(addSupportedCodec(pSampleStreamingSession->pPeerConnection, RTC_CODEC_H264_PROFILE_42E01F_LEVEL_ASYMMETRY_ALLOWED_PACKETIZATION_MODE));
CHK_STATUS(addSupportedCodec(pSampleStreamingSession->pPeerConnection, RTC_CODEC_OPUS));
CHK_STATUS(addSupportedCodec(pSampleStreamingSession->pPeerConnection, pSampleConfiguration->videoCodec));
CHK_STATUS(addSupportedCodec(pSampleStreamingSession->pPeerConnection, pSampleConfiguration->audioCodec));

// Add a SendRecv Transceiver of type video
videoTrack.kind = MEDIA_STREAM_TRACK_KIND_VIDEO;
videoTrack.codec = RTC_CODEC_H264_PROFILE_42E01F_LEVEL_ASYMMETRY_ALLOWED_PACKETIZATION_MODE;
videoTrack.codec = pSampleConfiguration->videoCodec;
videoRtpTransceiverInit.direction = RTC_RTP_TRANSCEIVER_DIRECTION_SENDRECV;
videoRtpTransceiverInit.rollingBufferDurationSec = 3;
// Considering 4 Mbps for 720p (which is what our samples use). This is for H.264.
@@ -593,7 +592,7 @@ STATUS createSampleStreamingSession(PSampleConfiguration pSampleConfiguration, P

// Add a SendRecv Transceiver of type audio
audioTrack.kind = MEDIA_STREAM_TRACK_KIND_AUDIO;
audioTrack.codec = RTC_CODEC_OPUS;
audioTrack.codec = pSampleConfiguration->audioCodec;
audioRtpTransceiverInit.direction = RTC_RTP_TRANSCEIVER_DIRECTION_SENDRECV;
audioRtpTransceiverInit.rollingBufferDurationSec = 3;
// For opus, the bitrate could be between 6 Kbps to 510 Kbps
25 changes: 17 additions & 8 deletions samples/GstAudioVideoReceiver.c
@@ -5,6 +5,7 @@

static UINT64 presentationTsIncrement = 0;
static BOOL eos = FALSE;
static RTC_CODEC audioCodec = RTC_CODEC_OPUS;

// This function is a callback for the transceiver for every single video frame it receives
// It writes these frames to a buffer and pushes it to the `appsrcVideo` element of the
@@ -67,7 +68,13 @@ VOID onGstAudioFrameReady(UINT64 customData, PFrame pFrame)

GST_BUFFER_DTS(buffer) = presentationTsIncrement;
GST_BUFFER_PTS(buffer) = presentationTsIncrement;
GST_BUFFER_DURATION(buffer) = gst_util_uint64_scale(pFrame->size, GST_SECOND, DEFAULT_AUDIO_OPUS_BYTE_RATE);

if (audioCodec == RTC_CODEC_AAC) {
GST_BUFFER_DURATION(buffer) = gst_util_uint64_scale(pFrame->size, GST_SECOND, DEFAULT_AUDIO_AAC_BYTE_RATE);
} else {
GST_BUFFER_DURATION(buffer) = gst_util_uint64_scale(pFrame->size, GST_SECOND, DEFAULT_AUDIO_OPUS_BYTE_RATE);
}
// TODO: check for other codecs once the pipelines are added

if (gst_buffer_fill(buffer, 0, pFrame->frameData, pFrame->size) != pFrame->size) {
DLOGE("Buffer fill did not complete correctly");
@@ -111,9 +118,11 @@ PVOID receiveGstreamerAudioVideo(PVOID args)
roleType = "Master";
}

CHK_ERR(gst_init_check(NULL, NULL, &error), STATUS_INTERNAL_ERROR, "[KVS %s] GStreamer initialization failed");
audioCodec = pSampleConfiguration->audioCodec;

CHK_ERR(gst_init_check(NULL, NULL, &error), STATUS_INTERNAL_ERROR, "[KVS GStreamer %s] GStreamer initialization failed");

CHK_ERR(pSampleStreamingSession != NULL, STATUS_NULL_ARG, "[KVS %s] Sample streaming session is NULL", roleType);
CHK_ERR(pSampleStreamingSession != NULL, STATUS_NULL_ARG, "[KVS Gstreamer %s] Sample streaming session is NULL", roleType);

// It is advised to modify the pipeline and the caps as per the source of the media. Customers can also modify this pipeline to
// use any other sinks instead of `filesink` like `autovideosink` and `autoaudiosink`. The existing pipelines are not complex enough to
@@ -168,17 +177,17 @@ PVOID receiveGstreamerAudioVideo(PVOID args)
audioVideoDescription = g_strjoin(" ", videoDescription, audioDescription, NULL);

pipeline = gst_parse_launch(audioVideoDescription, &error);
CHK_ERR(pipeline != NULL, STATUS_INTERNAL_ERROR, "[KVS %s] Pipeline is NULL", roleType);
CHK_ERR(pipeline != NULL, STATUS_INTERNAL_ERROR, "[KVS GStreamer %s] Pipeline is NULL", roleType);

appsrcVideo = gst_bin_get_by_name(GST_BIN(pipeline), "appsrc-video");
CHK_ERR(appsrcVideo != NULL, STATUS_INTERNAL_ERROR, "[KVS %s] Cannot find appsrc video", roleType);
CHK_ERR(appsrcVideo != NULL, STATUS_INTERNAL_ERROR, "[KVS GStreamer %s] Cannot find appsrc video", roleType);
CHK_STATUS(transceiverOnFrame(pSampleStreamingSession->pVideoRtcRtpTransceiver, (UINT64) appsrcVideo, onGstVideoFrameReady));
g_object_set(G_OBJECT(appsrcVideo), "caps", videocaps, NULL);
gst_caps_unref(videocaps);

if (pSampleConfiguration->mediaType == SAMPLE_STREAMING_AUDIO_VIDEO) {
appsrcAudio = gst_bin_get_by_name(GST_BIN(pipeline), "appsrc-audio");
CHK_ERR(appsrcAudio != NULL, STATUS_INTERNAL_ERROR, "[KVS %s] Cannot find appsrc audio", roleType);
CHK_ERR(appsrcAudio != NULL, STATUS_INTERNAL_ERROR, "[KVS GStreamer %s] Cannot find appsrc audio", roleType);
CHK_STATUS(transceiverOnFrame(pSampleStreamingSession->pAudioRtcRtpTransceiver, (UINT64) appsrcAudio, onGstAudioFrameReady));
g_object_set(G_OBJECT(appsrcAudio), "caps", audiocaps, NULL);
gst_caps_unref(audiocaps);
@@ -191,7 +200,7 @@ PVOID receiveGstreamerAudioVideo(PVOID args)

/* block until error or EOS */
bus = gst_element_get_bus(pipeline);
CHK_ERR(bus != NULL, STATUS_INTERNAL_ERROR, "[KVS %s] Bus is NULL", roleType);
CHK_ERR(bus != NULL, STATUS_INTERNAL_ERROR, "[KVS GStreamer %s] Bus is NULL", roleType);
msg = gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE, GST_MESSAGE_ERROR | GST_MESSAGE_EOS);

/* Free resources */
@@ -226,7 +235,7 @@ PVOID receiveGstreamerAudioVideo(PVOID args)

CleanUp:
if (error != NULL) {
DLOGE("[KVS %s] %s", roleType, error->message);
DLOGE("[KVS GStreamer %s] %s", roleType, error->message);
g_clear_error(&error);
}

11 changes: 11 additions & 0 deletions samples/Samples.h
@@ -13,6 +13,7 @@ extern "C" {
#include <com/amazonaws/kinesis/video/webrtcclient/Include.h>

#define NUMBER_OF_H264_FRAME_FILES 1500
#define NUMBER_OF_H265_FRAME_FILES 1500
#define NUMBER_OF_OPUS_FRAME_FILES 618
#define DEFAULT_FPS_VALUE 25
#define DEFAULT_VIDEO_HEIGHT_PIXELS 720
@@ -25,6 +26,14 @@ extern "C" {
#define DEFAULT_AUDIO_AAC_BITS_PER_SAMPLE 16
#define DEFAULT_MAX_CONCURRENT_STREAMING_SESSION 10

#define AUDIO_CODEC_NAME_ALAW "alaw"
#define AUDIO_CODEC_NAME_MULAW "mulaw"
#define AUDIO_CODEC_NAME_OPUS "opus"
#define AUDIO_CODEC_NAME_AAC "aac"
#define VIDEO_CODEC_NAME_H264 "h264"
#define VIDEO_CODEC_NAME_H265 "h265"
#define VIDEO_CODEC_NAME_VP8 "vp8"

#define SAMPLE_MASTER_CLIENT_ID "ProducerMaster"
#define SAMPLE_VIEWER_CLIENT_ID "ConsumerViewer"
#define SAMPLE_CHANNEL_NAME (PCHAR) "ScaryTestChannel"
@@ -123,6 +132,8 @@ typedef struct {
PCHAR pCaCertPath;
PAwsCredentialProvider pCredentialProvider;
SIGNALING_CLIENT_HANDLE signalingClientHandle;
RTC_CODEC audioCodec;
RTC_CODEC videoCodec;
PBYTE pAudioFrameBuffer;
UINT32 audioBufferSize;
PBYTE pVideoFrameBuffer;
51 changes: 45 additions & 6 deletions samples/kvsWebRTCClientMaster.c
@@ -10,6 +10,8 @@ INT32 main(INT32 argc, CHAR* argv[])
PCHAR pChannelName;
SignalingClientMetrics signalingClientMetrics;
signalingClientMetrics.version = SIGNALING_CLIENT_METRICS_CURRENT_VERSION;
RTC_CODEC audioCodec = RTC_CODEC_OPUS;
RTC_CODEC videoCodec = RTC_CODEC_H264_PROFILE_42E01F_LEVEL_ASYMMETRY_ALLOWED_PACKETIZATION_MODE;

SET_INSTRUMENTED_ALLOCATORS();
UINT32 logLevel = setLogLevel();
@@ -27,10 +29,28 @@ INT32 main(INT32 argc, CHAR* argv[])

CHK_STATUS(createSampleConfiguration(pChannelName, SIGNALING_CHANNEL_ROLE_TYPE_MASTER, TRUE, TRUE, logLevel, &pSampleConfiguration));

if (argc > 3) {
if (!STRCMP(argv[3], AUDIO_CODEC_NAME_AAC)) {
audioCodec = RTC_CODEC_AAC;
} else {
DLOGI("[KVS Master] Defaulting to opus as the specified codec's sample frames may not be available");
}
}

if (argc > 4) {
if (!STRCMP(argv[4], VIDEO_CODEC_NAME_H265)) {
videoCodec = RTC_CODEC_H265;
} else {
DLOGI("[KVS Master] Defaulting to H264 as the specified codec's sample frames may not be available");
}
}

// Set the audio and video handlers
pSampleConfiguration->audioSource = sendAudioPackets;
pSampleConfiguration->videoSource = sendVideoPackets;
pSampleConfiguration->receiveAudioVideoSource = sampleReceiveAudioVideoFrame;
pSampleConfiguration->audioCodec = audioCodec;
pSampleConfiguration->videoCodec = videoCodec;

if (argc > 2 && STRNCMP(argv[2], "1", 2) == 0) {
pSampleConfiguration->channelInfo.useMediaStorage = TRUE;
@@ -44,11 +64,21 @@

// Check if the samples are present

CHK_STATUS(readFrameFromDisk(NULL, &frameSize, "./h264SampleFrames/frame-0001.h264"));
DLOGI("[KVS Master] Checked sample video frame availability....available");
if (videoCodec == RTC_CODEC_H264_PROFILE_42E01F_LEVEL_ASYMMETRY_ALLOWED_PACKETIZATION_MODE) {
CHK_STATUS(readFrameFromDisk(NULL, &frameSize, "./h264SampleFrames/frame-0001.h264"));
DLOGI("[KVS Master] Checked H264 sample video frame availability....available");
} else if (videoCodec == RTC_CODEC_H265) {
CHK_STATUS(readFrameFromDisk(NULL, &frameSize, "./h265SampleFrames/frame-0001.h265"));
DLOGI("[KVS Master] Checked H265 sample video frame availability....available");
}

CHK_STATUS(readFrameFromDisk(NULL, &frameSize, "./opusSampleFrames/sample-001.opus"));
DLOGI("[KVS Master] Checked sample audio frame availability....available");
if (audioCodec == RTC_CODEC_OPUS) {
CHK_STATUS(readFrameFromDisk(NULL, &frameSize, "./opusSampleFrames/sample-001.opus"));
DLOGI("[KVS Master] Checked Opus sample audio frame availability....available");
} else if (audioCodec == RTC_CODEC_AAC) {
CHK_STATUS(readFrameFromDisk(NULL, &frameSize, "./aacSampleFrames/sample-001.aac"));
DLOGI("[KVS Master] Checked AAC sample audio frame availability....available");
}

// Initialize KVS WebRTC. This must be done before anything else, and must only be done once.
CHK_STATUS(initKvsWebRtc());
@@ -146,7 +176,11 @@ PVOID sendVideoPackets(PVOID args)

while (!ATOMIC_LOAD_BOOL(&pSampleConfiguration->appTerminateFlag)) {
fileIndex = fileIndex % NUMBER_OF_H264_FRAME_FILES + 1;
SNPRINTF(filePath, MAX_PATH_LEN, "./h264SampleFrames/frame-%04d.h264", fileIndex);
if (pSampleConfiguration->videoCodec == RTC_CODEC_H264_PROFILE_42E01F_LEVEL_ASYMMETRY_ALLOWED_PACKETIZATION_MODE) {
SNPRINTF(filePath, MAX_PATH_LEN, "./h264SampleFrames/frame-%04d.h264", fileIndex);
} else if (pSampleConfiguration->videoCodec == RTC_CODEC_H265) {
SNPRINTF(filePath, MAX_PATH_LEN, "./h265SampleFrames/frame-%04d.h265", fileIndex);
}

CHK_STATUS(readFrameFromDisk(NULL, &frameSize, filePath));

@@ -218,7 +252,12 @@ PVOID sendAudioPackets(PVOID args)

while (!ATOMIC_LOAD_BOOL(&pSampleConfiguration->appTerminateFlag)) {
fileIndex = fileIndex % NUMBER_OF_OPUS_FRAME_FILES + 1;
SNPRINTF(filePath, MAX_PATH_LEN, "./opusSampleFrames/sample-%03d.opus", fileIndex);

if (pSampleConfiguration->audioCodec == RTC_CODEC_AAC) {
SNPRINTF(filePath, MAX_PATH_LEN, "./aacSampleFrames/sample-%03d.aac", fileIndex);
} else if (pSampleConfiguration->audioCodec == RTC_CODEC_OPUS) {
SNPRINTF(filePath, MAX_PATH_LEN, "./opusSampleFrames/sample-%03d.opus", fileIndex);
}

CHK_STATUS(readFrameFromDisk(NULL, &frameSize, filePath));
