Bug 328382 - support single-image files
Status: RESOLVED FIXED
Product: GStreamer
Classification: Platform
Component: gstreamer (core)
Version: 0.8.x
Hardware/OS: Other Linux
Importance: Normal normal
Target Milestone: 0.8.12
Assigned To: GStreamer Maintainers
QA Contact: GStreamer Maintainers
Depends on:
Blocks:
Reported: 2006-01-24 04:59 UTC by Ronald Bultje
Modified: 2006-01-30 13:45 UTC
See Also:
GNOME target: ---
GNOME version: ---


Attachments
core part (1.37 KB, patch)
2006-01-24 04:59 UTC, Ronald Bultje
patch status: none
plugins part (19.25 KB, patch)
2006-01-24 04:59 UTC, Ronald Bultje
patch status: none

Description Ronald Bultje 2006-01-24 04:59:12 UTC
Attached patches add an image tag (core), which can be used for single-image QuickTime files. The same tag is also supposed to be used for ID3 tags containing covers, but I didn't implement that since I don't care. There's also a related Totem bug elsewhere.

The plugins patch is somewhat larger:
- The alsasink part is to make streams progress when the audio lags behind the video at the beginning. This makes startup for some webstreams faster.
- The tag part fixes some incomprehensible, stupid code in playbin which makes tags be read from the stream. I was once told that this is for performance, but it is logically wrong, since some elements have no pads when tags are being read and are thus not capable of emitting tags to playbin that way.
- The qtdemux part implements GST_TAG_IMAGE for qtdemux.
- The xvimagesink part makes xvimagesink only open an xcontext in states >= READY, because otherwise you can never open multiple Totem (bvw) instances, even if none of them *use* the Xv port.
With those changes and the Totem patch, Totem-mozilla will view the quicktime/single-image files. (A short sketch of reading the new tag from an application follows below.)
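For illustration, here is a minimal sketch of how a player could consume the new tag, written against the 0.10-era C API (gst_message_parse_tag, GST_TAG_IMAGE stored as a GstBuffer); 0.8 delivers tags through a different mechanism, and the helper name handle_tag_message() is hypothetical, so this is a sketch rather than the code added by the patches.

  #include <gst/gst.h>

  /* Illustrative only: pull a GST_TAG_IMAGE buffer out of a tag message
   * (0.10-style API; 0.8 delivers tags through a different mechanism). */
  static void
  handle_tag_message (GstMessage *msg)
  {
    GstTagList *tags = NULL;
    const GValue *value;

    gst_message_parse_tag (msg, &tags);
    value = gst_tag_list_get_value_index (tags, GST_TAG_IMAGE, 0);
    if (value != NULL) {
      GstBuffer *image = gst_value_get_buffer (value);  /* encoded image */

      /* hand GST_BUFFER_DATA (image) / GST_BUFFER_SIZE (image) to e.g.
       * a GdkPixbufLoader, as discussed later in this bug */
      g_print ("got image tag, %u bytes\n", GST_BUFFER_SIZE (image));
    }
    gst_tag_list_free (tags);
  }
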
Comment 1 Ronald Bultje 2006-01-24 04:59:40 UTC
Created attachment 57978
core part
Comment 2 Ronald Bultje 2006-01-24 04:59:53 UTC
Created attachment 57979
plugins part
Comment 3 Jan Schmidt 2006-01-24 09:12:37 UTC
Adding an image type tag sounds like a good idea anyway, but I'm curious why you did it this way instead of having qtdemux create a single image/jpeg stream and output the data there?
Comment 4 Ronald Bultje 2006-01-24 13:26:50 UTC
In general, it'd also be useful for ID3 tags, which can contain album covers. See also the GST_TAG_IMAGE comment.

In this case:
1) Several movies contain a single-image stream and a real video stream. We want to distinguish between those. The image is probably a cover or screenshot, whereas the movie is what should be played. Those are not to be confused, and we should not allow the user to "switch" between them. So at the playbin level, they need to be handled differently.
2) The easiest way to do that is to handle them differently at the demuxer level, and in this case I figured that exposing the image as a tag would be a good idea, since it basically is stream metadata.
3) This does not keep the Xv resource busy while we "play" the extra stream. It also has no pipeline overhead for display; you merely use a GdkPixbuf (see the decoding sketch after this list).
4) 0.8 has a very poor "keep display on EOS" concept and no "display on startup", so the user experience with a GdkPixbuf will actually be better.
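A minimal sketch of the GdkPixbuf route mentioned in point 3, assuming the encoded image bytes have already been taken from the GST_TAG_IMAGE tag; the helper name pixbuf_from_image_tag() is hypothetical and no GStreamer pipeline is involved.

  #include <gdk-pixbuf/gdk-pixbuf.h>

  /* Hypothetical helper: decode encoded image bytes (e.g. taken from
   * GST_TAG_IMAGE) into a GdkPixbuf without any GStreamer pipeline.  */
  static GdkPixbuf *
  pixbuf_from_image_tag (const guchar *data, gsize size)
  {
    GdkPixbufLoader *loader = gdk_pixbuf_loader_new ();
    GdkPixbuf *pixbuf = NULL;

    if (!gdk_pixbuf_loader_write (loader, data, size, NULL)) {
      gdk_pixbuf_loader_close (loader, NULL);   /* cancel the load */
      g_object_unref (loader);
      return NULL;
    }
    if (gdk_pixbuf_loader_close (loader, NULL)) {
      pixbuf = gdk_pixbuf_loader_get_pixbuf (loader);
      if (pixbuf != NULL)
        g_object_ref (pixbuf);      /* the loader keeps its own reference */
    }
    g_object_unref (loader);
    return pixbuf;                  /* caller unrefs; NULL on failure */
  }
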
Comment 5 Edward Hervey 2006-01-24 14:12:44 UTC
Ronald, as you mention in your latest comment, single-image streams and video streams should be handled differently... "at a playbin level".

It is slightly problematic to do that separation at the demuxer level, since:
 * the image is not strictly metadata (take the case of JPEG in QuickTime, which is a completely normal way of distributing the image data).
 * if somebody wants to use that image in a GStreamer pipeline (possibly together with the other contained streams), they can no longer do so, unless they build some complicated/uncommon pipeline that takes the GstBuffer from the tag and injects it into another pipeline.

Playbin is a different case. There you could figure out that a decodebin pad with video/x-raw-yuv caps that have a framerate of 0 (or no framerate at all) is a single picture, connect a fakesink with a 'handoff' handler, and have playbin itself emit that GST_TAG_IMAGE.

That way all playback applications using playbin (Totem, Rhythmbox, etc.) would see the image as a tag, while other applications would still be able to use that image stream (a small wiring sketch follows below).
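As an illustration of that wiring, here is a hedged sketch in 0.10-style C: detect a raw video pad whose caps carry framerate 0/1 (or none), hook up a fakesink with 'handoff', and re-post the picture as GST_TAG_IMAGE. The helper names (looks_like_single_picture, attach_image_fakesink) are hypothetical; this is not the actual playbin/decodebin code.

  #include <gst/gst.h>

  /* Hypothetical check: does this pad carry a single picture, i.e. raw
   * video caps with framerate 0/1 or no framerate field at all?        */
  static gboolean
  looks_like_single_picture (GstCaps *caps)
  {
    GstStructure *s = gst_caps_get_structure (caps, 0);
    gint num = -1, denom = -1;

    if (!gst_structure_has_name (s, "video/x-raw-yuv"))
      return FALSE;
    if (!gst_structure_get_fraction (s, "framerate", &num, &denom))
      return TRUE;                  /* no framerate field at all */
    return (num == 0);              /* framerate 0/1 */
  }

  /* Re-post each buffer of the picture stream as a GST_TAG_IMAGE tag. */
  static void
  on_image_handoff (GstElement *fakesink, GstBuffer *buffer, GstPad *pad,
                    gpointer user_data)
  {
    GstElement *bin = GST_ELEMENT (user_data);
    GstTagList *tags = gst_tag_list_new ();        /* 0.10 API */

    gst_tag_list_add (tags, GST_TAG_MERGE_REPLACE, GST_TAG_IMAGE,
                      buffer, NULL);
    gst_element_post_message (bin,
        gst_message_new_tag (GST_OBJECT (bin), tags));
  }

  /* Hook a fakesink with signal-handoffs onto the picture pad. */
  static void
  attach_image_fakesink (GstElement *bin, GstPad *image_pad)
  {
    GstElement *sink = gst_element_factory_make ("fakesink", NULL);
    GstPad *sinkpad;

    g_object_set (sink, "signal-handoffs", TRUE, NULL);
    g_signal_connect (sink, "handoff", G_CALLBACK (on_image_handoff), bin);

    gst_bin_add (GST_BIN (bin), sink);
    sinkpad = gst_element_get_static_pad (sink, "sink");
    gst_pad_link (image_pad, sinkpad);
    gst_object_unref (sinkpad);
    gst_element_sync_state_with_parent (sink);
  }
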

I just committed a couple of fixes to qtdemux 0.10 that make it create that extra pad with framerate=(GstFraction)0/1.

The idea of GST_TAG_IMAGE is really good though, something tells me Jan is going to implement it in 0.10 core/id3demux :)
Comment 6 Ronald Bultje 2006-01-24 14:50:43 UTC
I didn't say "should"; it's just one of the many ways to implement it.

Doing this with a fakesink is problematic since we need a way to let the application know the caps (size, video format) of the decoded image (RGB? YUV? etc.). Therefore, letting the app figure it out is just as easy; apps already have decoders. Practically, this may come down to having the app set up another GStreamer pipeline to decode it, but that's up to the app. GdkPixbuf provides better convenience functions, so let's just use that.

Reasons why I think we should do this at the demuxer level:
* It is not video. It is not. It's a single image. We should not give "users" the impression that they can switch between the video and this image stream. That is not possible.
* You could make a reasonable, though not perfect, point that this really is a tag, especially in the case of ID3, but the qtdemux single images aren't far off from the ID3 case.
* Doing it this way provides a unified API for the same thing for an MP3 file with ID3 tags and an AAC file in a QuickTime container, which is very convenient and exactly the goal of gst.

It's not perfect, but hey, nothing is. At least it works.
Comment 7 Edward Hervey 2006-01-24 15:28:50 UTC
(In reply to comment #6)
> I didn't say "should", it is just one of the many ways to implement it.
> 
> Doing this as a fakesink is problematic since we need a way to let the
> application know the caps (size, video format) of the decoded image (RGB? YUV?
> etc.). Therefore, letting the app figure it out, they already have decoders, is
> as easy.

  The caps are buffer properties in 0.10, so that's one problem solved. As for getting a usable buffer, you could also do 'ffmpegcolorspace ! video/x-raw-rgb,... ! fakesink' to get a buffer that would be more convenient to convert to a GdkPixbuf.
  MyPixbuf = g_object_new (GDK_TYPE_PIXBUF, "height", bufferheight, "width", bufferwidth, "pixels", GST_BUFFER_DATA(buffer), ...);
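For illustration, here is a hedged sketch of that raw-RGB route using gdk_pixbuf_new_from_data() rather than g_object_new(); the handoff wiring is the same as in the earlier sketch, and the packed 24-bit RGB and rowstride assumptions are illustrative only.

  #include <gst/gst.h>
  #include <gdk-pixbuf/gdk-pixbuf.h>

  /* Illustrative handoff for a '... ! ffmpegcolorspace !
   * video/x-raw-rgb ! fakesink' branch: wrap the raw RGB frame in a
   * GdkPixbuf.  Assumes packed 24-bit RGB with a 4-byte-aligned stride. */
  static void
  on_rgb_handoff (GstElement *fakesink, GstBuffer *buffer, GstPad *pad,
                  gpointer user_data)
  {
    GstCaps *caps = GST_BUFFER_CAPS (buffer);  /* 0.10: caps live on buffers */
    GstStructure *s = gst_caps_get_structure (caps, 0);
    gint width = 0, height = 0;
    GdkPixbuf *pixbuf;

    gst_structure_get_int (s, "width", &width);
    gst_structure_get_int (s, "height", &height);

    pixbuf = gdk_pixbuf_new_from_data (GST_BUFFER_DATA (buffer),
        GDK_COLORSPACE_RGB, FALSE, 8, width, height,
        GST_ROUND_UP_4 (width * 3),            /* assumed rowstride */
        NULL, NULL);
    /* ... hand the pixbuf to the UI; note it borrows the buffer's data. */
  }
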

> Practically, this may come down to having the app setup another gst
> pipeline to decode it, that's up to the app. Gdkpixbuf provides better
> convenience functions, so let's just use that.
> 
> Reasons why I think we should do this at the demuxer level:
> * it is not video. It is not. It's a single image. we should not give "users"
> the impression that they can switch between the video and this image stream.
> That is not possible.

  Images aren't video. But the only difference is that one has a single frame and the other has more than one. The problem is of course for end users, which is why the fix should go into playbin: that way we don't impede developers who know the difference between the two and who should decide what to do with it.

> * you could make a reasonable, though not perfect, point that this really is a
> tag, especially in the case of id3, but the qtdemux single images aren't far
> off from the id3 case.

  I completely agree for ID3. An image in ID3 is metadata.

> * doing it this way provides a unified API for the same thing for a mp3 file
> with id3 tags and a aac file in a quicktime container, which is very convenient
> and eactly the goal of gst.

  Is convenience *the* goal of gst? Or is it the goal of playbin? I thought being versatile was the goal of a multimedia framework, and that it was up to convenience elements to simplify the life of developers *IF* they want to.

> 
> It's not perfect, but hey, nothing is. At least it works.
> 

  I don't think we'd be hacking on anything if everything was perfect in gst :) Once I have time I'll implement that behaviour in decodebin 0.10 and see how far it can go.
Comment 8 Ronald Bultje 2006-01-24 16:17:57 UTC
Do whatever you wish in whatever corporate branch you work on. This patch will go into 0.8 unless someone has relevant points for improvement, since you appear to be missing the key point: it is 0.8. Until 0.10 is workable, I don't care shit about 0.10. The patch is merely here to inform you that a new feature was added to 0.8 so you can easily forward-port it to 0.10.
Comment 9 Jan Schmidt 2006-01-24 16:31:46 UTC
"The idea of GST_TAG_IMAGE is really good though, something tells me Jan is
going to implement it in 0.10 core/id3demux :)"

Damn, what gave me away? Perhaps it was when I suggested it in conversation back in December when I was reworking id3demux ;)

I agree with Ronald that it's nice for applications to have playbin emit these images as a tag, but I also agree with Edward's point of view that straight playback isn't the only application scenario, and there's a use case for having the image data available on a pad as 'just another stream' from the demuxer.

So, I think we should do both - for 0.8, have the demuxers send tags (for which the attached patches look good), and in 0.10 take the longer route of putting code in playbin for it.


Comment 10 Ronald Bultje 2006-01-30 13:45:40 UTC
Applied to 0.8 CVS.