GNOME Bugzilla – Bug 791838
netsim: fix misleading packet delay log
Last modified: 2018-01-24 00:27:16 UTC
> netsim gstnetsim.c:371:gst_net_sim_delay_buffer:<netsim0> Delaying packet by 22282955ms
> netsim gstnetsim.c:371:gst_net_sim_delay_buffer:<netsim0> Delaying packet by 56ms
> netsim gstnetsim.c:371:gst_net_sim_delay_buffer:<netsim0> Delaying packet by 0ms
> netsim gstnetsim.c:371:gst_net_sim_delay_buffer:<netsim0> Delaying packet by 0ms
> netsim gstnetsim.c:371:gst_net_sim_delay_buffer:<netsim0> Delaying packet by 8ms
> netsim gstnetsim.c:371:gst_net_sim_delay_buffer:<netsim0> Delaying packet by 0ms
> netsim gstnetsim.c:371:gst_net_sim_delay_buffer:<netsim0> Delaying packet by 0ms
> netsim gstnetsim.c:371:gst_net_sim_delay_buffer:<netsim0> Delaying packet by 0ms

The packet delay time shall be calculated as ready_time minus the current time.
Created attachment 365833 [details] [review] fix misleading packet delay log
Comment on attachment 365833 [details] [review]
fix misleading packet delay log

>+  now_time = g_get_monotonic_time ();
>+  ready_time = now_time + delay * 1000;
>+  GST_DEBUG_OBJECT (netsim, "Delaying packet by %ldms",
>+      (ready_time - now_time) / 1000);

Isn't this the same as writing ..("Delaying packet by %ldms", delay) ?
If the packet is not allowed to be reordered (allow-reordering set to FALSE) and this packet's ready_time is earlier than the last packet's ready_time, then the current packet's ready_time is adjusted, so the actual delay is no longer @delay.
Review of attachment 365833 [details] [review]:

::: gst/netsim/gstnetsim.c
@@ +368,2 @@
  if (!netsim->allow_reordering && ready_time < netsim->last_ready_time)
    ready_time = netsim->last_ready_time + 1;

Tim, here we may have modified ready_time in case it was before the last_ready_time.
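To make the point concrete, here is a minimal sketch of the adjusted code path (illustrative only; the helper name is hypothetical and the fields are passed in explicitly, this is not the exact gstnetsim.c code):

#include <gst/gst.h>

/* Illustrative sketch, not the actual gstnetsim.c implementation. */
static gint64
schedule_packet_sketch (GstElement * netsim, gint delay_ms,
    gboolean allow_reordering, gint64 * last_ready_time)
{
  gint64 now_time = g_get_monotonic_time ();
  gint64 ready_time = now_time + (gint64) delay_ms * 1000;

  /* With reordering disallowed, a packet may not become ready before the
   * previously scheduled one, so its effective delay can exceed delay_ms. */
  if (!allow_reordering && ready_time < *last_ready_time)
    ready_time = *last_ready_time + 1;

  *last_ready_time = ready_time;

  /* Log the delay actually applied, not the requested delay_ms. */
  GST_DEBUG_OBJECT (netsim, "Delaying packet by %" G_GUINT64_FORMAT "ms",
      (guint64) (ready_time - now_time) / 1000);

  return ready_time;
}

With allow-reordering disabled, the clamp against last_ready_time means a packet can wait longer than delay_ms, so logging ready_time - now_time reports the delay that is actually applied.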
Though, while testing it, I have a hard time seeing the correlation between max-kbps and the delay, which is weird.
Yes, from my understanding after checking the netsim implementation:

a) the delay is based on "min-delay"/"max-delay"/"delay-distribution"/"delay-probability"/"allow-reordering", and it seems to simulate random packet delay.

b) the throughput is based on "max-kbps"/"max-bucket-size"; a token bucket seems to be used for the throughput simulation (a generic sketch follows below), and buffers are dropped if no token is available. Would it be valuable not to drop when no token is available?
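For reference, this is roughly how a token bucket gates throughput; a generic sketch under assumed type and field names, not the actual gstnetsim.c implementation:

#include <glib.h>

/* Generic token-bucket sketch (names assumed, not gstnetsim.c).
 * Tokens are measured in bits; the bucket refills at max_kbps and is
 * capped at max_bucket_size.  A packet passes only if enough tokens
 * have accumulated, otherwise it is dropped. */
typedef struct {
  gint max_kbps;            /* refill rate, kilobits per second */
  gint max_bucket_size;     /* bucket capacity, kilobits */
  gdouble tokens;           /* currently available bits */
  gint64 prev_time;         /* last refill timestamp (monotonic, µs) */
} TokenBucket;

static gboolean
token_bucket_take (TokenBucket * tb, gsize packet_bytes)
{
  gint64 now = g_get_monotonic_time ();
  gdouble elapsed_s = (now - tb->prev_time) / (gdouble) G_USEC_PER_SEC;
  gdouble packet_bits = packet_bytes * 8.0;

  tb->prev_time = now;

  /* Refill proportionally to elapsed time, clamped to the bucket size. */
  tb->tokens += elapsed_s * tb->max_kbps * 1000.0;
  tb->tokens = MIN (tb->tokens, tb->max_bucket_size * 1000.0);

  if (tb->tokens < packet_bits)
    return FALSE;               /* no budget left: drop the packet */

  tb->tokens -= packet_bits;
  return TRUE;
}

Not dropping in the FALSE case would presumably mean holding the packet until enough tokens accumulate, i.e. modelling a buffered link rather than a lossy one.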
Also, do we need to handle the GST_EVENT_FLUSH_START/GST_EVENT_FLUSH_STOP events, so that we remove all currently attached delay GSources?
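A rough sketch of what that could look like, assuming the element kept references to its pending delay sources in a list (the field and helper names here are hypothetical, not existing gstnetsim.c code):

/* Hypothetical sketch: assumes the element tracks its pending delay
 * GSources in a GSList that owns a reference to each source. */
#include <glib.h>

static void
net_sim_clear_pending_delays (GSList ** delay_sources)
{
  GSList *l;

  /* Stop every pending delay callback... */
  for (l = *delay_sources; l != NULL; l = l->next)
    g_source_destroy ((GSource *) l->data);

  /* ...and drop the references the list was holding. */
  g_slist_free_full (*delay_sources, (GDestroyNotify) g_source_unref);
  *delay_sources = NULL;
}

/* e.g. from the sink-pad event function:
 *   case GST_EVENT_FLUSH_STOP:
 *     net_sim_clear_pending_delays (&netsim->delay_sources);
 *     break;
 */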
These questions all seem valid; I'm not very familiar with this "bucket" approach. We could also question the use of a GSource in there in general. But assuming the goal of this element is to introduce jitter, the GSource probably provides a good deal of randomness naturally.
Pushed this, thanks for the patch, and sorry for the nonsensical review, I have no idea what I was thinking!

commit 3f87a9dd7f629f02ad6c76ee7bdd213ac5409322
Author: Jun Xie <jun.xie@samsung.com>
Date:   Thu Dec 21 18:33:49 2017 +0800

    netsim: fix misleading packet delay log

    packet delay time shall be calculated by ready_time minus current time

    https://bugzilla.gnome.org/show_bug.cgi?id=791838

---

Regarding the other questions:

1) FLUSH_START/FLUSH_STOP: yes, we should probably clear any pending delayed packets/sources on FLUSH_STOP. Feel free to open a separate bug for that and/or make a patch for that.

2) Not dropping packets when max-kbps is maxed out: I'm guessing the current behaviour is meant to simulate a UDP-like scenario. I'd say let's open another bug for this if it is something you want to pursue.
Comment on attachment 365833 [details] [review]
fix misleading packet delay log

Committed with a minor change (use "%" G_GUINT64_FORMAT instead of "%ld").