Bug 762910 - rtspsrc: Add special support for ONVIF servers (custom headers, timestamp extraction, ...)
Status: RESOLVED OBSOLETE
Product: GStreamer
Classification: Platform
Component: gst-plugins-good
Version: git master
OS: Other Linux
Importance: Normal enhancement
Target Milestone: git master
Assigned To: GStreamer Maintainers
QA Contact: GStreamer Maintainers
Depends on:
Blocks:
 
 
Reported: 2016-03-01 09:16 UTC by Nicola
Modified: 2018-11-03 15:08 UTC
See Also:
GNOME target: ---
GNOME version: ---


Attachments
extract and apply onvif timestamp (2.75 KB, patch)
2016-03-01 09:16 UTC, Nicola
needs-work

Description Nicola 2016-03-01 09:16:56 UTC
Created attachment 322727 [details] [review]
extract and apply onvif timestamp

The patch extracts the absolute timestamp from the ONVIF RTP extension and applies it to the RTP buffers.

The ONVIF NTP timestamps are converted to Unix timestamps; the output is something like this:

/GstPipeline:pipeline0/GstFakeSink:fakesink0: last-message = chain   ******* (fakesink0:sink) (4434 bytes, dts: 404561:47:11.657000000, pts: 404561:47:11.657000000, duration: none, offset: -1, offset_end: -1, flags: 00004000 tag-memory ) 0x7fa7ac0132d0
/GstPipeline:pipeline0/GstFakeSink:fakesink0: last-message = chain   ******* (fakesink0:sink) (4454 bytes, dts: 404561:47:11.993000000, pts: 404561:47:11.993000000, duration: none, offset: -1, offset_end: -1, flags: 00004000 tag-memory ) 0x7fa7ac0134f0
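
For context, a rough self-contained sketch of the kind of extraction and conversion involved (this is not the attached patch itself; it assumes the ONVIF extension, profile 0xABAC, is the only RTP header extension present, and omits error handling):

#include <gst/rtp/gstrtpbuffer.h>

static GstClockTime
read_onvif_unix_timestamp (GstBuffer * buf)
{
  GstRTPBuffer rtp = GST_RTP_BUFFER_INIT;
  GstClockTime ts = GST_CLOCK_TIME_NONE;
  gpointer data;
  guint16 bits;
  guint wordlen;

  if (!gst_rtp_buffer_map (buf, GST_MAP_READ, &rtp))
    return GST_CLOCK_TIME_NONE;

  /* ONVIF replay extension: 8-byte NTP timestamp followed by flags/CSeq */
  if (gst_rtp_buffer_get_extension_data (&rtp, &bits, &data, &wordlen) &&
      bits == 0xABAC && wordlen >= 3) {
    guint64 ntptime = GST_READ_UINT64_BE (data);
    /* NTP epoch (1900) -> Unix epoch (1970), then 32.32 fixed point -> ns */
    guint64 unixtime = ntptime - (G_GUINT64_CONSTANT (2208988800) << 32);
    ts = gst_util_uint64_scale (unixtime, GST_SECOND,
        G_GUINT64_CONSTANT (1) << 32);
  }

  gst_rtp_buffer_unmap (&rtp);
  return ts;
}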
Comment 1 Sebastian Dröge (slomo) 2016-03-04 07:12:08 UTC
Review of attachment 322727 [details] [review]:

This shouldn't be the default, it will break playback unless a) the basetime of the pipeline is 0 and b) the pipeline uses a clock that returns UNIX time (e.g. the real-time system clock).

::: gst/onvif/gstrtponvifparse.c
@@ +122,3 @@
+  self->last_onvif_timestamp =
+      timestamp_seconds * GST_SECOND + timestamp_nseconds * GST_NSECOND -
+      (2208988800LL * GST_SECOND);

The question is also: why convert to UNIX time? You don't know what your pipeline clock is using; it might very well use NTP times.

@@ +148,3 @@
+    }
+    if (GST_CLOCK_TIME_IS_VALID (buf->pts)) {
+      buf->pts = self->last_onvif_timestamp;

GST_BUFFER_PTS (buf) = ...
Comment 2 Nicola 2016-03-04 08:18:18 UTC
(In reply to Sebastian Dröge (slomo) from comment #1)
> Review of attachment 322727 [details] [review] [review]:

thanks for the review

> 
> This shouldn't be the default, it will break playback unless a) basetime of
> the pipeline is 0 and b) the pipeline uses a clock the returns UNIX time
> (e.g. the real-time system clock).
> 

would a property such as apply-onvif-timestamp, exposed as an enum:

0 - no
1 - ntp time
2 - unix time

be good enough?

the first option would preserve the timestamps generated in rtspsrc (a rough sketch of such a property follows below)
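
Purely to illustrate that idea, installing such a property could look roughly like this (the property name follows the proposal above; the enum, property id and helper are hypothetical, and a plain int range is used instead of a registered GEnum to keep the sketch short):

#include <glib-object.h>

/* hypothetical values mirroring the proposal above */
typedef enum {
  APPLY_ONVIF_TIMESTAMP_NO = 0,     /* keep the timestamps set by rtspsrc */
  APPLY_ONVIF_TIMESTAMP_NTP = 1,    /* stamp buffers with the raw NTP time */
  APPLY_ONVIF_TIMESTAMP_UNIX = 2    /* convert NTP to Unix time first */
} ApplyOnvifTimestamp;

enum { PROP_0, PROP_APPLY_ONVIF_TIMESTAMP };    /* hypothetical property id */

static void
install_apply_onvif_timestamp_property (GObjectClass * gobject_class)
{
  /* would be called from the element's class_init */
  g_object_class_install_property (gobject_class, PROP_APPLY_ONVIF_TIMESTAMP,
      g_param_spec_int ("apply-onvif-timestamp", "Apply ONVIF timestamp",
          "0 = no, 1 = NTP time, 2 = Unix time",
          APPLY_ONVIF_TIMESTAMP_NO, APPLY_ONVIF_TIMESTAMP_UNIX,
          APPLY_ONVIF_TIMESTAMP_NO,
          G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS));
}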

> ::: gst/onvif/gstrtponvifparse.c
> @@ +122,3 @@
> +  self->last_onvif_timestamp =
> +      timestamp_seconds * GST_SECOND + timestamp_nseconds * GST_NSECOND -
> +      (2208988800LL * GST_SECOND);
> 

I think this could be refactored this way:

/* 64-bit NTP time: upper 32 bits are seconds, lower 32 bits are the fraction */
ntptime = GST_READ_UINT64_BE (data);
/* NTP epoch (1900) -> Unix epoch (1970): subtract 2208988800 seconds */
unixtime = ntptime - (G_GUINT64_CONSTANT (2208988800) << 32);
/* convert the 32.32 fixed-point value to GstClockTime nanoseconds */
self->last_onvif_timestamp = gst_util_uint64_scale (unixtime, GST_SECOND, (G_GINT64_CONSTANT (1) << 32));

the result is the same, but this is the way GStreamer already does it in other elements

> Question is also: why convert to UNIX time? 

in my use case I need to extract a recording; having the absolute time for each buffer allows me to accept/drop buffers as needed. Unix time is convenient for me since I can simply build a datetime object from it.
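
For example (a minimal illustration, assuming the buffer PTS already carries Unix time in nanoseconds as with this patch; g_date_time_new_from_unix_utc takes whole seconds):

GstClockTime pts = GST_BUFFER_PTS (buf);
GDateTime *dt = g_date_time_new_from_unix_utc (pts / GST_SECOND);
/* ... compare dt against the requested recording window ... */
g_date_time_unref (dt);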

>You don't know what your
> pipeline clock is using, it might very well use NTP times.
> 
> @@ +148,3 @@
> +    }
> +    if (GST_CLOCK_TIME_IS_VALID (buf->pts)) {
> +      buf->pts = self->last_onvif_timestamp;
> 
> GST_BUFFER_PTS (buf) = ...

ok
Comment 3 Sebastian Dröge (slomo) 2016-03-05 08:09:42 UTC
Do we have any information from which clock those timestamps are actually generated? A property seems acceptable, but just a GstMeta might be better as we have no way to do anything meaningful with those timestamps out of the box.
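
For reference, a GstMeta along those lines could be sketched roughly like this (all of the names are hypothetical, nothing like this exists in GStreamer today; no free or transform function is registered, so the meta is simply dropped when buffers are copied):

#include <gst/gst.h>

/* hypothetical meta carrying the ONVIF NTP time from the RTP extension */
typedef struct {
  GstMeta meta;
  guint64 ntp_timestamp;
} OnvifTimestampMeta;

static gboolean
onvif_timestamp_meta_init (GstMeta * meta, gpointer params, GstBuffer * buffer)
{
  ((OnvifTimestampMeta *) meta)->ntp_timestamp = 0;
  return TRUE;
}

static GType
onvif_timestamp_meta_api_get_type (void)
{
  static volatile GType type = 0;
  static const gchar *tags[] = { NULL };

  if (g_once_init_enter (&type)) {
    GType t = gst_meta_api_type_register ("OnvifTimestampMetaAPI", tags);
    g_once_init_leave (&type, t);
  }
  return type;
}

static const GstMetaInfo *
onvif_timestamp_meta_get_info (void)
{
  static const GstMetaInfo *info = NULL;

  if (g_once_init_enter ((GstMetaInfo **) & info)) {
    const GstMetaInfo *mi = gst_meta_register (onvif_timestamp_meta_api_get_type (),
        "OnvifTimestampMeta", sizeof (OnvifTimestampMeta),
        onvif_timestamp_meta_init, NULL, NULL);
    g_once_init_leave ((GstMetaInfo **) & info, (GstMetaInfo *) mi);
  }
  return info;
}

/* onvifparse would then attach the meta instead of rewriting the PTS:
 *   OnvifTimestampMeta *m = (OnvifTimestampMeta *)
 *       gst_buffer_add_meta (buf, onvif_timestamp_meta_get_info (), NULL);
 *   m->ntp_timestamp = ntptime;
 */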
Comment 4 Nicola 2016-03-05 18:50:29 UTC
(In reply to Sebastian Dröge (slomo) from comment #3)
> Do we have any information from which clock those timestamps are actually
> generated? A property seems acceptable, but just a GstMeta might be better
> as we have no way to do anything meaningful with those timestamps out of the
> box.

from the ONVIF streaming spec,

http://www.onvif.org/specs/stream/ONVIF-Streaming-Spec-v230.pdf

section 6.2.1:

"""
The NTP timestamps in the RTP extension header shall increase monotonically over successive packets within a single RTP stream. They should correspond to wallclock time as measured at the original transmitter of the stream, adjusted if necessary to preserve monotonicity
"""

please note that ONVIF replay is broken anyway using GStreamer: there is a single RTSP URL that serves non-contiguous recordings. For example, on my test camera I have a recording on February 17 and another one on March 3; both are accessible using this URL:

rtsp://192.168.10.144:554/ProfileG/Recording-1/recording/play.smp 

a pipeline like rtspsrc ! fakesink will error out at the end of the first recording, with these logs (there is a huge timestamp gap)

0:04:35.046476045  1878 0x7fdc98030370 WARN                 rtspsrc gstrtspsrc.c:3140:on_timeout:<rtspsrc0> source 77e5a2cc, stream 77e5a2cc in session 1 timed out
0:04:35.271076208  1878 0x7fdc98030400 WARN                 rtspsrc gstrtspsrc.c:3140:on_timeout:<rtspsrc0> source 9b0c26e8, stream 9b0c26e8 in session 0 timed out
/GstPipeline:pipeline0/GstFakeSink:fakesink0: last-message = event   ******* (fakesink0:sink) E (type: eos (28174), ) 0x7fdc80001920
0:04:35.271238280  1878 0x7fdc98030630 INFO                    task gsttask.c:315:gst_task_func:<rtpjitterbuffer0:src> Task going to paused


using the patch here, at least the stream plays until the end with no error.

the point is that onvifparse is a specialized element and it is not autoplugged; if you put it in your pipeline, it is because you are dealing with an ONVIF camera and so you are interested in the ONVIF timestamps.

For info, I provided another patch that allows sending the custom headers needed for ONVIF playback with rtspsrc here:

https://bugzilla.gnome.org/show_bug.cgi?id=762884
Comment 5 Sebastian Dröge (slomo) 2016-03-06 07:59:17 UTC
Well, we should find a solution for autoplugging onvifparse and making these streams work out of the box :)

So basically this here provides the same guarantees as the NTP time in RTCP packets. Maybe it can be used in similar ways inside rtpbin then?


How is it solving playback of these streams? You say they have a huge timestamp gap, what exactly does that mean? A timestamp gap in the RTP timestamps (and not a wraparound)? There's code for handling that in rtpbin/rtpsource somewhere.

The on_timeout lines you pasted there sound more like a problem with the server not responding to RTCP anymore, but that would have to be tracked down properly by reading the logs.


In any case, I think there's another bug here independent of your patch and we should generally handle these streams somehow automatically
Comment 6 Nicola 2016-03-06 13:13:58 UTC
(In reply to Sebastian Dröge (slomo) from comment #5)
> Well, we should find a solution for autoplugging onvifparse and making these
> streams work out of the box :)

seems not easy; probably non-standard headers for the rtspsrc requests are needed, see below

> 
> So basically this here provides the same guarantees as the NTP time in RTCP
> packets. Maybe it can be used in similar ways inside rtpbin then?
> 

yes, other relevant notes from onvif replay specs:

- Clients should use a TCP-based transport for replay, in order to achieve reliable delivery of media packets
- The server MAY elect not to send RTCP packets during replay. In typical usage RTCP packets are not required, because usually a reliable transport will be used, and because absolute time information is sent within the stream, making the timing information in RTCP sender reports redundant.


> 
> How is it solving playback of these streams? You say they have a huge
> timestamp gap, what exactly does that mean? A timestamp gap in the RTP
> timestamps (and not a wraparound)? There's code for handling that in
> rtpbin/rtpsource somewhere
> 
> The on_timeout lines you pasted there sound more like a problem with the
> server not responding to RTCP anymore. But would have to be tracked down
> properly by reading the logs


there are 2 issues here:

1) you have to set the "Rate-Control: no" header when rtspsrc sends the PLAY request. If this header is omitted, the camera sends the recording synchronized to the timestamps, so when the first recording ends it will not send anything until the gap to the next start has elapsed, and rtspsrc will exit with a timeout if this gap is big enough
2) if you set "Rate-Control: no", the camera sends the recording at max speed and GStreamer can sync on the clock; however, this seems problematic too (at least over TCP). After the huge gap this is what happens:

Pushing buffer 6005, dts 99:99:99.999999999, pts 0:04:08.745810372
Pushing buffer 6006, dts 99:99:99.999999999, pts 0:00:00.087810372
....
Pushing buffer 6213, dts 99:99:99.999999999, pts 0:00:00.087810372

full logs here:

http://195.250.34.59/temp/log.txt.bz2 

> 
> 
> In any case, I think there's another bug here independent of your patch and
> we should generally handle these streams somehow automatically

let me know what you think is needed for upstream inclusion; however, I'm on a deadline and my approach seems fine to me, so I don't think I'll have enough time to dig into rtpbin, at least for now
Comment 7 Sebastian Dröge (slomo) 2016-03-07 10:15:17 UTC
(In reply to Nicola from comment #6)
> (In reply to Sebastian Dröge (slomo) from comment #5)
> > Well, we should find a solution for autoplugging onvifparse and making these
> > streams work out of the box :)
> 
> seems not easy, probably non standard headers for rtspsrc requests are
> needed, see below

Can we detect if there is an ONVIF server? Does it tell us anything in the DESCRIBE or SETUP replies that can be used?


> > So basically this here provides the same guarantees as the NTP time in RTCP
> > packets. Maybe it can be used in similar ways inside rtpbin then?
> > 
> 
> yes, other relevant notes from onvif replay specs:
> 
> - Clients should use a TCP-based transport for replay, in order to achieve
> reliable delivery of media packets
> - The server MAY elect not to send RTCP packets during replay. In typical
> usage RTCP packets are not required, because usually a reliable transport
> will be used, and because absolute time information is sent within the
> stream, making the timing information in RTCP sender reports redundant.

We don't require RTCP so that should be fine then :) But it also hints that we should use the onvif time information instead of the RTCP SRs.


> > How is it solving playback of these streams? You say they have a huge
> > timestamp gap, what exactly does that mean? A timestamp gap in the RTP
> > timestamps (and not a wraparound)? There's code for handling that in
> > rtpbin/rtpsource somewhere
> > 
> > The on_timeout lines you pasted there sound more like a problem with the
> > server not responding to RTCP anymore. But would have to be tracked down
> > properly by reading the logs
> 
> 
> there are 2 issue here:
> 
> 1) you have to set "Rate-Control: no" header when rtspsrc send play request.
> If this header is omitted the camera send the recording syncronized on
> timestamp and so when the first recording end it will not send anything
> until the gap to the next start is filled and so rtspsrc will exit for
> timeout if this gap is big enough
> 2) if you set "Rate-Control: no" the camera send the recording to the max
> speed and gstreamer can sync on the clock, however this seems problematic
> too (at least over tcp), after the huge gap this is what happen:

2) means the server sends data as fast as possible and not in real-time? That might break some assumptions in various places, where we assume RTP data is sent in real-time. But let's see that once the other parts are fixed.


In any case, I think we should detect onvif servers in rtspsrc and then do special things for that.
Comment 8 Nicola 2016-03-07 10:46:43 UTC
(In reply to Sebastian Dröge (slomo) from comment #7)
> (In reply to Nicola from comment #6)
> > (In reply to Sebastian Dröge (slomo) from comment #5)
> > > Well, we should find a solution for autoplugging onvifparse and making these
> > > streams work out of the box :)
> > 
> > seems not easy, probably non standard headers for rtspsrc requests are
> > needed, see below
> 
> Can we detect if there is a onvif server? Does it tell us anything in the
> DESCRIBE or SETUP replies that can be used?

maybe we can look for a=x-onvif-track in the DESCRIBE response; an ONVIF camera should set this line only when we request a recording
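
That check could be as simple as something like this (a sketch using the gst-sdp API; the function name is made up):

#include <gst/sdp/gstsdpmessage.h>

static gboolean
sdp_looks_like_onvif_replay (const GstSDPMessage * sdp)
{
  guint i;

  for (i = 0; i < gst_sdp_message_medias_len (sdp); i++) {
    const GstSDPMedia *media = gst_sdp_message_get_media (sdp, i);

    /* ONVIF recordings advertise an "a=x-onvif-track:..." attribute per media */
    if (gst_sdp_media_get_attribute_val (media, "x-onvif-track") != NULL)
      return TRUE;
  }
  return FALSE;
}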

however there are several use cases:

1) replay a recording from the start; in this case, if no special headers are sent, the server (camera) sends the stream in real time and all is fine (but ONVIF timestamps are not used) until a big gap
2) replay a recording from a start date to an end date; special headers are required for the RTSP SETUP and PLAY requests
3) download a recording (what I need); special headers are needed for the RTSP SETUP and PLAY requests to download at max speed (possibly also the Range headers from use case 2); see the sketch below
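
To make the header side of 2) and 3) concrete, a PLAY request built with the gst-rtsp API could look roughly like this (a sketch only: the header names come from the linked ONVIF streaming spec, the date range is made up, and getting rtspsrc to emit these is exactly what the patch in bug 762884 is about):

#include <gst/rtsp/gstrtspmessage.h>

static GstRTSPResult
build_onvif_play_request (GstRTSPMessage * msg, const gchar * uri)
{
  GstRTSPResult res = gst_rtsp_message_init_request (msg, GST_RTSP_PLAY, uri);

  if (res != GST_RTSP_OK)
    return res;

  /* tell the server we speak the ONVIF replay extensions */
  gst_rtsp_message_add_header_by_name (msg, "Require", "onvif-replay");
  /* send as fast as possible instead of paced to real time */
  gst_rtsp_message_add_header_by_name (msg, "Rate-Control", "no");
  /* replay only this absolute UTC interval (made-up example range) */
  gst_rtsp_message_add_header_by_name (msg, "Range",
      "clock=20160217T000000Z-20160303T000000Z");

  return GST_RTSP_OK;
}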

> 
> 
> > > So basically this here provides the same guarantees as the NTP time in RTCP
> > > packets. Maybe it can be used in similar ways inside rtpbin then?
> > > 
> > 
> > yes, other relevant notes from onvif replay specs:
> > 
> > - Clients should use a TCP-based transport for replay, in order to achieve
> > reliable delivery of media packets
> > - The server MAY elect not to send RTCP packets during replay. In typical
> > usage RTCP packets are not required, because usually a reliable transport
> > will be used, and because absolute time information is sent within the
> > stream, making the timing information in RTCP sender reports redundant.
> 
> We don't require RTCP so that should be fine then :) But it also hints that
> we should use the onvif time information instead of the RTCP SRs.
> 
> 
> > > How is it solving playback of these streams? You say they have a huge
> > > timestamp gap, what exactly does that mean? A timestamp gap in the RTP
> > > timestamps (and not a wraparound)? There's code for handling that in
> > > rtpbin/rtpsource somewhere
> > > 
> > > The on_timeout lines you pasted there sound more like a problem with the
> > > server not responding to RTCP anymore. But would have to be tracked down
> > > properly by reading the logs
> > 
> > 
> > there are 2 issue here:
> > 
> > 1) you have to set "Rate-Control: no" header when rtspsrc send play request.
> > If this header is omitted the camera send the recording syncronized on
> > timestamp and so when the first recording end it will not send anything
> > until the gap to the next start is filled and so rtspsrc will exit for
> > timeout if this gap is big enough
> > 2) if you set "Rate-Control: no" the camera send the recording to the max
> > speed and gstreamer can sync on the clock, however this seems problematic
> > too (at least over tcp), after the huge gap this is what happen:
> 
> 2) means the server sends data as fast as possible and not in real-time?
> That might break some assumptions in various places, where we assume RTP
> data is sent in real-time. But let's see that once the other parts are fixed.

yes, the speed is limited by the network/client, so if we ask the server to send at max speed we can sync on the clock client-side; we should use TCP, as the ONVIF spec recommends

> 
> 
> In any case, I think we should detect onvif servers in rtspsrc and then do
> special things for that.

there are several use cases; if we want to handle them automatically, we need to at least give users the ability to specify a start and end date, and based on this send an appropriate Range header.

My approach of allowing custom headers to be sent (see the other patch) and extracting the ONVIF timestamp when someone manually plugs in onvifparse seems less invasive and more flexible; however, I could be too focused on my specific use case.
Comment 9 dashesy 2016-12-05 21:21:11 UTC
Instead of auto-detecting ONVIF servers in rtspsrc we can add a boolean property (like `is_onvif` or `is_onvif_playback`). That way we treat the source specially only when playing back a recording from an ONVIF source.
When the ONVIF source is live/realtime, no special treatment is needed; any special treatment may inadvertently break otherwise-working streams.
Comment 10 GStreamer system administrator 2018-11-03 15:08:04 UTC
-- GitLab Migration Automatic Message --

This bug has been migrated to freedesktop.org's GitLab instance and has been closed from further activity.

You can subscribe and participate further through the new bug through this link to our GitLab instance: https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/issues/260.