GNOME Bugzilla – Bug 754260
qtdemux: Cannot play .MOV files from "Polaroid Cube" camera, adpcm pushed out wrongly framed
Last modified: 2018-11-03 15:03:33 UTC
I cannot play the .MOV files produced by the Polaroid Cube camera. The files do not work in Pitivi, nor in Totem/Nautilus/Sushi. A sample file from this camera can be found at http://hxy.io/FILE0003.MOV (13.2MB). Please let me know if I can provide any other information. Thanks
I just noticed this warning:

gst-launch-1.0 -v filesrc location=FILE0003.MOV ! qtdemux name=d ! queue ! adpcmdec ! pulsesink

WARN audiobasesink gstaudiobasesink.c:1137:gst_audio_base_sink_wait_event:<pulsesink0> error: Sink not negotiated before GAP event.
That warning should've been fixed by c24a1254c9f1f73e23a1471ce9dd08f513618c8b, 8cf8332b91a2caf3b23f711ea1eecd7430a0bfa5 and follow-up commits.
> That warning should've been fixed by
> c24a1254c9f1f73e23a1471ce9dd08f513618c8b,
> 8cf8332b91a2caf3b23f711ea1eecd7430a0bfa5 and follow-up commits.

No, because there isn't actually any GAP event sent here. The message is misleading (fixed in git). What's sent here is the EOS event.

What I'm wondering is why the output CAPS event is not sent immediately when the subclass calls _set_output_format()? I just found some old commit that delays it "for consistency with the video base class", but I don't really understand why that's needed.
I think it was to allow more things to happen between set_output_format() and negotiate(). At least for video, you can set further things on the output state.
And audiodecoder (and others) should probably send CAPS events downstream before EOS if they can :)
Created attachment 310387 [details] [review]
audiodecoder: ensure we have caps before sending EOS

Avoids assertions on sinks when EOS is received without any caps being set.
This is only part of the problem; the real problem is that qtdemux thinks all samples from the audio track have size=0. The audio is DVI ADPCM and has the following properties:

samples per chunk = 1017
channels = 2
samples per frame = 30510
bytes per frame = 1024

qtdemux then computes:

(samples-per-chunk * channels) / samples-per-frame * bytes-per-frame

But samples-per-chunk * channels is much smaller than samples-per-frame, so the integer division truncates to 0 — which seems to indicate some issue in the way we are calculating those. Having the bytes-per-frame multiplication before the division would lead to a non-zero result, but is it right? Does anyone know how DVI ADPCM should be packed into MOV?
Putting the multiplication before the division leads to 68 as the size of the buffers to be read, and that leads to:

0:00:00.080544436 21814 0x16eeb70 WARN adpcmdec adpcmdec.c:288:adpcmdec_decode_ima_block:<adpcmdec0> Synchronisation error
0:00:00.080580846 21814 0x16eeb70 WARN adpcmdec adpcmdec.c:370:adpcmdec_decode_block:<adpcmdec0> Decode of block failed
0:00:00.080600637 21814 0x16eeb70 WARN audiodecoder gstaudiodecoder.c:3139:_gst_audio_decoder_error:<adpcmdec0> error: frame decode failed

Does anyone understand how DVI ADPCM is stored? :)
Comment on attachment 310387 [details] [review]
audiodecoder: ensure we have caps before sending EOS

The same change might be needed in videodecoder too. Also, there might already be queued-up caps from gst_audio_decoder_set_output_format() that just haven't been sent yet; those should probably be preferred.
I read that this ADPCM format is similar to IMA, which means it uses 4 bits per sample: http://wiki.multimedia.cx/index.php?title=IMA_ADPCM
Just pushing the data one frame at a time (1024 bytes) makes this work.
-- GitLab Migration Automatic Message --

This bug has been migrated to freedesktop.org's GitLab instance and has been closed from further activity. You can subscribe and participate further through the new bug at our GitLab instance: https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/issues/215.