Bug 166783 - [PATCH] New plugin: imagemixer
Status: RESOLVED FIXED
Product: GStreamer
Classification: Platform
Component: gst-plugins
Version: git master
OS/Hardware: Other Linux
Importance: Normal enhancement
Target Milestone: 0.8.11
Assigned To: GStreamer Maintainers
QA Contact: GStreamer Maintainers
Depends on:
Blocks: 166785
 
 
Reported: 2005-02-09 13:38 UTC by Gergely Nagy
Modified: 2005-07-01 16:53 UTC
See Also:
GNOME target: ---
GNOME version: ---


Attachments
Patch implementing the element (27.95 KB, patch) - 2005-02-09 13:40 UTC, Gergely Nagy
New imagemixer implementation (30.62 KB, patch) - 2005-03-02 14:15 UTC, Gergely Nagy
Alternative implementation (33.15 KB, patch) - 2005-06-04 18:11 UTC, Ronald Bultje
update (37.41 KB, patch) - 2005-06-04 20:19 UTC, Ronald Bultje
update2 (40.66 KB, patch) - 2005-06-04 21:57 UTC, Ronald Bultje
update (40.65 KB, patch) - 2005-06-05 09:07 UTC, Ronald Bultje
update (40.84 KB, patch) - 2005-06-05 11:21 UTC, Ronald Bultje
update (42.45 KB, patch) - 2005-06-05 14:49 UTC, Ronald Bultje
update (42.65 KB, patch) - 2005-06-11 19:15 UTC, Ronald Bultje
update (43.49 KB, patch) - 2005-06-19 11:27 UTC, Ronald Bultje

Description Gergely Nagy 2005-02-09 13:38:08 UTC
There is this videomixer plugin, which works most of the time. However, there
are a number of problems with it:

1) It uses custom pads, so that it can have properties on them
2) It only supports one blending mode (though some more are in the sources, just
not enabled)
3) It is too complex

imagemixer is a simplification of videomixer. It only has two sink pads, and the
merge position can be set through element properties. Furthermore, 11 different
blending modes are supported, and more can be added easily if need be. It is
also more resource-friendly, as it reuses the background buffer and merges the
foreground onto it without making a copy (except when the buffer is read-only,
in which case a copy is made first).

A patch against CVS head as of a few minutes ago will be attached shortly.
Another one, which redoes videomixer to operate on top of imagemixer, might come
later.

An example pipeline might look like:

videotestsrc ! video/x-raw-yuv,width=160,height=120,framerate=(double)10 ! alpha name=vtest \
  v4lsrc autoprobe=0 autoprobe-fps=0 ! video/x-raw-yuv,width=320,height=240,framerate=(double)10 ! alpha name=v4l \
  v4l. ! imagemixer xpos=80 ypos=40 name=mixer \
  vtest. ! mixer. \
  mixer. ! ffmpegcolorspace ! theoraenc ! oggmux ! filesink location=test.ogg
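
For illustration only, here is a minimal sketch of how a table of per-component blend modes could be organised so that new modes are easy to add, as the description suggests. The names and the table layout are hypothetical and are not taken from the attached patch.

#include <glib.h>

/* Hypothetical sketch: each blend mode is a per-component function looked up
 * from a table, so adding a new mode only means adding one function and one
 * table row. */
typedef guint8 (*BlendFunc) (guint8 bg, guint8 fg, guint8 alpha);

static guint8
blend_mode_normal (guint8 bg, guint8 fg, guint8 alpha)
{
  /* standard "over" blend, weighted by the foreground alpha */
  return (fg * alpha + bg * (255 - alpha)) / 255;
}

static guint8
blend_mode_multiply (guint8 bg, guint8 fg, guint8 alpha)
{
  guint m = (bg * fg) / 255;

  return (m * alpha + bg * (255 - alpha)) / 255;
}

static const struct {
  const gchar *name;
  BlendFunc func;
} blend_modes[] = {
  { "normal", blend_mode_normal },
  { "multiply", blend_mode_multiply },
  /* further modes slot in here */
};
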
Comment 1 Gergely Nagy 2005-02-09 13:40:32 UTC
Created attachment 37240 [details] [review]
Patch implementing the element

This is my current version of the element, and it WorksForMe(tm). If the patch
gets applied, please preserve the comment block (or at least the arch-tag part
of it) at the end of the sources; otherwise my version control system will get
confused when I sync with CVS. Thanks!
Comment 2 Gergely Nagy 2005-02-17 00:32:15 UTC
I have a newer version, but there is still room for improvement, so I'm
continuing to work on it. Poke me for a fresh update - I don't want to pollute
Bugzilla with a zillion versions.
Comment 3 Gergely Nagy 2005-03-02 14:15:32 UTC
Created attachment 38152 [details] [review]
New imagemixer implementation

Changes since the first version:
- Supports the opacity property
- Optimised merge
  + Calculate some parameters only when caps or properties change
  + Faster blending
- Various cleanups all over the place
Comment 4 Ronald Bultje 2005-06-04 10:57:09 UTC
Gergely, I want to use this as a generic image mixer. I've looked at videomixer;
it doesn't suffice. However, this doesn't either. What I need comes closer to
this, but it isn't there just yet. Here are some requirements:

* one "main" input pad, output pad - formats between those linked. Basically
your bg_pad. I want those named sink/src because when only those two are used,
it acts as a passthrough. Always pad, etc. Required. Supported formats,
preferably anything, but at least a few of the main YUV formats (e.g. I420,
YUY2, AYUV, Y444), for the obvious performance reasons (I don't want to convert
anything from/to AYUV just for this). It'd be nice if it did RGB, too, but this
is dreaming.
* request subpicture_sink_%d pads, with a slave format, possibly depending on
the input format (because this eases coding - no color conversion in this
element). Negotiation can be delayed. For blit, go over this list, blend over
main input (as your algorithm already does), etc. It'd be nice to allow
blending AYUV over YUY2 (for performance), but not strictly required, because I
understand this will heavily complicate the code. At the very least, I need to
be able to blend Y444 and AYUV over each other. I also need
multiple input subpicture pads, one is not necessarily enough.
* Filler event support - definitely required. I may port the
ext/pango/gsttextoverlay.c algorithm over to here, since it's well-tested and
known to work fine.
* The subpicture_sink_%d pads need properties for alignment/positioning. I'm
currently thinking of v-align, h-align (enum; bottom, top, middle) and v-offset,
h-offset (int; default e.g. 12. This is conveniently ignored for the 'middle'
value). For this, they need to be subclasses of GstRealPad, like in videomixer.
* I'm ok with keeping the blending implementations, although I won't need them
myself... Optimizations here would rock, of course.

Here are a few of the scenarios I'm thinking of:
* blend a (static) JPEG image over video
* blend a (static) PNG image (with alpha) over video
* blend a scaled mini-video top-left over another video
* blend a rendered pango-text over a video
* blend a DVD subtitle over a video

I'm willing to do a lot of coding for this. But it needs to just work in the
end; it'll require quite a bit of coding, as far as I can see right now. It'd be
nice to have some easy test cases for all of this (this can't be all too hard).
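
As a rough sketch of the pad layout described above: one always sink/src pair for passthrough plus request subpicture pads. The template names follow the comment, but the caps strings are guesses in GStreamer 0.8 style, not code from any attached patch.

#include <gst/gst.h>

/* Sketch only: an always sink/src pair so the element can act as a
 * passthrough, plus a request template for the subpicture overlay inputs. */
static GstStaticPadTemplate sink_template = GST_STATIC_PAD_TEMPLATE ("sink",
    GST_PAD_SINK, GST_PAD_ALWAYS,
    GST_STATIC_CAPS ("video/x-raw-yuv, "
        "format = (fourcc) { I420, YUY2, AYUV, Y444 }"));

static GstStaticPadTemplate src_template = GST_STATIC_PAD_TEMPLATE ("src",
    GST_PAD_SRC, GST_PAD_ALWAYS,
    GST_STATIC_CAPS ("video/x-raw-yuv, "
        "format = (fourcc) { I420, YUY2, AYUV, Y444 }"));

static GstStaticPadTemplate subpicture_sink_template =
    GST_STATIC_PAD_TEMPLATE ("subpicture_sink_%d",
    GST_PAD_SINK, GST_PAD_REQUEST,
    GST_STATIC_CAPS ("video/x-raw-yuv, "
        "format = (fourcc) { AYUV, Y444 }"));
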
Comment 5 Ronald Bultje 2005-06-04 18:11:12 UTC
Created attachment 47238 [details] [review]
Alternative implementation

This is what I had in mind:
* state machine copied from pango/textoverlay
* subclass pad stuff pulled from videomixer
* I'd like to use your blending code in here
* mix it together, add some pepper, salt and tape and that's this

The blending code is unfinished, as you can see in gst_image_mixer_do_mix (),
but it's a start... :). It shows the purpose. Works for blending AYUV over
I420, which is what happens if you do this:

gst-launch filesrc location=~/samples/movies/world_of_warcraft_m480.ogg ! oggdemux name=d \
  ! { queue ! theoradec ! ffmpegcolorspace ! { queue ! imagemixer ! ffmpegcolorspace ! ximagesink } } \
  d. ! { queue ! vorbisdec ! audioconvert ! audioscale ! alsasink device=dmix } \
  { filesrc location=/tmp/ximagesrc.png blocksize=100000000 ! pngdec ! ffmpegcolorspace ! videoscale \
    ! video/x-raw-yuv,width=120,height=80 ! queue ! imagemixer0.subpicture_sink_%d }

The nice thing is that this design fulfills all the design goals stated above, and
is thus capable of replacing the current rather limited textoverlay-only
implementation in playbin.
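
For reference, a very rough sketch of what the AYUV-over-I420 case involves; the function name is invented and positioning is ignored, so this is a toy version rather than the gst_image_mixer_do_mix () from the patch. The luma plane is blended per pixel, while the 2x2-subsampled chroma planes are blended once per block.

#include <glib.h>

/* Toy sketch: blend a w x h packed-AYUV subpicture over the top-left corner
 * of an I420 background. Chroma in I420 is subsampled 2x2, so U and V are
 * blended once per block, here using the alpha of the block's top-left pixel. */
static void
blend_ayuv_over_i420 (const guint8 * ayuv, gint w, gint h,
    guint8 * y_plane, guint8 * u_plane, guint8 * v_plane,
    gint y_stride, gint uv_stride)
{
  gint x, y;

  for (y = 0; y < h; y++) {
    for (x = 0; x < w; x++) {
      const guint8 *p = ayuv + (y * w + x) * 4;   /* bytes: A, Y, U, V */
      guint a = p[0];
      guint8 *dy = y_plane + y * y_stride + x;

      *dy = (p[1] * a + *dy * (255 - a)) / 255;

      if ((x & 1) == 0 && (y & 1) == 0) {
        guint8 *du = u_plane + (y / 2) * uv_stride + (x / 2);
        guint8 *dv = v_plane + (y / 2) * uv_stride + (x / 2);

        *du = (p[2] * a + *du * (255 - a)) / 255;
        *dv = (p[3] * a + *dv * (255 - a)) / 255;
      }
    }
  }
}
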
Comment 6 Ronald Bultje 2005-06-04 20:19:59 UTC
Created attachment 47241 [details] [review]
update

* finishes all blit-to-I420 implementations (AYUV, Y444, I420, YUY2). Tests
same as above, but also works for JPEG images (I420), in addition to PNG
(AYUV).

Next, I'd like to separate the draw-to-area implementations into their own
functions, and add blend-functions as in Gergely's version. Then, it's pretty
much finished.
Comment 7 Ronald Bultje 2005-06-04 21:57:38 UTC
Created attachment 47245 [details] [review]
update2

* same as above, now with all subsample modes implemented.

Left todo: different blend modes in blend.[ch]. We may want to move the various
pixel-access methods there, too.
Comment 8 Ronald Bultje 2005-06-05 09:07:32 UTC
Created attachment 47256 [details] [review]
update

* this one also supports empty fillers as emitted by the queue element.
Comment 9 Ronald Bultje 2005-06-05 09:17:47 UTC
gst-launch-0.8 dvdreadsrc ! dvddemux name=d \
  d. ! mpeg2dec ! queue max-size-time=0 max-size-buffers=0 max-size-bytes=100000000 \
  ! { imagemixer name=i ! videoscale ! ffmpegcolorspace ! ximagesink } \
  d.subpicture_02 ! dvdsubdec ! queue max-size-time=0 max-size-buffers=0 block-timeout=40000000 \
  ! i.subpicture_sink_%d

That now renders DVD subtitles for me. Alpha values are wrong, and it's slow.
The block timeout is needed because the pad is only created when we see the
first subtitle, so the pad is dead (and causes hangs) before that. I don't know
how to fix that yet.
Comment 10 Ronald Bultje 2005-06-05 11:21:01 UTC
Created attachment 47261 [details] [review]
update

Oops, I did a quilt refresh on the wrong patch, so I actually attached the old
patch again. Here's a second try.
Comment 11 Ronald Bultje 2005-06-05 14:49:50 UTC
Created attachment 47273 [details] [review]
update

This one separates out the alpha parsing, so that if the alpha is 0, we can skip
the rest. Performance seems to improve quite a bit because of this. Still not
all that great, though.
Comment 12 Ronald Bultje 2005-06-11 19:15:22 UTC
Created attachment 47622 [details] [review]
update

This update makes the whole loop part operate in integer-only mode, which makes
it slightly faster still. Dennis also told me to omit a bunch of if() checks in
the inner loops, but I see very little speed gain (or, well, nothing measurable)
if I do that. I guess I need a better test case for that. I'll experiment with
callgrind again later tonight.
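
A hedged sketch of the kind of integer-only inner loop the last two updates describe. The function and macro names are invented; the shift-based DIV255 trick is only one common way to avoid a real division, not necessarily what the patch does, and the fully-opaque shortcut is an extra assumption on top of the "skip the rest when alpha is 0" idea above.

#include <glib.h>

/* floor(v / 255), exact for the 16-bit products used below */
#define DIV255(v) (((v) + 1 + ((v) >> 8)) >> 8)

/* Sketch: integer-only "normal" blend over packed AYUV, skipping fully
 * transparent pixels and copying fully opaque ones without any arithmetic. */
static void
blend_line (guint8 * bg, const guint8 * fg, guint n_pixels)
{
  guint i;

  for (i = 0; i < n_pixels; i++, bg += 4, fg += 4) {
    guint a = fg[0];

    if (a == 0)
      continue;                 /* fully transparent: nothing to do */

    if (a == 255) {             /* fully opaque: plain copy */
      bg[1] = fg[1];
      bg[2] = fg[2];
      bg[3] = fg[3];
      continue;
    }

    bg[1] = DIV255 (fg[1] * a + bg[1] * (255 - a));
    bg[2] = DIV255 (fg[2] * a + bg[2] * (255 - a));
    bg[3] = DIV255 (fg[3] * a + bg[3] * (255 - a));
  }
}
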
Comment 13 Luca Ognibene 2005-06-15 18:55:20 UTC
It doesn't work well for me :(

1) It doesn't compile because it needs blend.c. I removed the file from the
Makefile and then it compiles.

2) This pipeline:

gst-launch-0.8 videotestsrc ! video/x-raw-yuv,framerate=10.0 ! ffmpegcolorspace \
  ! { queue ! imagemixer ! ffmpegcolorspace ! ximagesink } \
  { filesrc location=~/Temp/screenshot.png blocksize=100000000 ! pngdec ! ffmpegcolorspace ! videoscale \
    ! video/x-raw-yuv,width=120,height=80 ! queue ! imagemixer0.subpicture_sink_%d }

produces correct output but uses 100% CPU (even if I use 1.0 as the framerate,
so it doesn't seem to be slowness; it looks like a loop somewhere).

Any idea? Maybe this patch depends on other patches?
I'm using CVS from a few weeks ago, compiling current CVS right now.. will tell
you if it works better!
Comment 14 Ronald Bultje 2005-06-15 21:23:20 UTC
I've noticed the 100% CPU, but it's only gst-launch; it works fine in Totem and the like.
Comment 15 Luca Ognibene 2005-06-16 19:04:00 UTC
Updated core CVS -> still 100% CPU usage.
I've spent some time looking at it and I've found this:

 - after pushing the first buffer, the elements in the filesrc thread become
"inactive":
/pipeline0/thread1/filesrc0.src: active = FALSE
/pipeline0/thread1/videoscale0.sink: active = FALSE
/pipeline0/thread1/videoscale0.src: active = FALSE
/pipeline0/thread1/ffmpegcolorspace2.sink: active = FALSE
/pipeline0/thread1/ffmpegcolorspace2.src: active = FALSE
/pipeline0/thread1/pngdec0.sink: active = FALSE
/pipeline0/thread1/pngdec0.src: active = FALSE
Of course the queue element is still active.

Looking at the GST_DEBUG=scheduler:5 log, I can see a lot of the following
output between two frames:

LOG   (0x81e0148 - 310818:57:08.322380000)       scheduler(17686)
gstoptimalscheduler.c(2793):gst_opt_scheduler_iterate: not scheduling disabled
chain 0x81e46f8
LOG   (0x81e0148 - 310818:57:08.365833000)       scheduler(17686)
gstoptimalscheduler.c(507):unref_chain: unref chain 0x81e46f8 3->2
DEBUG (0x81e0148 - 310818:57:08.366330000)       scheduler(17686)
gstoptimalscheduler.c(2775):gst_opt_scheduler_iterate:<optscheduler1> iterating
LOG   (0x81e0148 - 310818:57:08.366664000)       scheduler(17686)
gstoptimalscheduler.c(497):ref_chain: ref chain 0x81e46f8 2->3
LOG   (0x81e0148 - 310818:57:08.366981000)       scheduler(17686)
gstoptimalscheduler.c(2793):gst_opt_scheduler_iterate: not scheduling disabled
chain 0x81e46f8
LOG   (0x81e0148 - 310818:57:08.399260000)       scheduler(17686)
gstoptimalscheduler.c(507):unref_chain: unref chain 0x81e46f8 3->2
DEBUG (0x81e0148 - 310818:57:08.399759000)       scheduler(17686)
gstoptimalscheduler.c(2775):gst_opt_scheduler_iterate:<optscheduler1> iterating
LOG   (0x81e0148 - 310818:57:08.400115000)       scheduler(17686)
gstoptimalscheduler.c(497):ref_chain: ref chain 0x81e46f8 2->3


So it seems to be a bug in the scheduler.. :( I can provide some log files if you
are interested. To confirm this idea, this pipeline works fine:

gst-launch-0.8 videotestsrc ! video/x-raw-yuv,framerate=1.0 ! ffmpegcolorspace \
  ! { queue ! imagemixer ! ffmpegcolorspace ! ximagesink } \
  { filesrc location=~/Temp/screenshot.png blocksize=100000000 ! pngdec ! ffmpegcolorspace ! videoscale \
    ! video/x-raw-yuv,width=120,height=80 ! freeze ! queue ! imagemixer0.subpicture_sink_%d }

(I've just added the freeze element)

Well, the pipeline doesn't really work fine.. it displays the subpicture only in
the first frame, but that can be a bug/feature in imagemixer. The important thing
is that it doesn't use 100% CPU.

Hope this helps.
Comment 16 Ronald Bultje 2005-06-19 11:27:38 UTC
Created attachment 47984 [details] [review]
update

* stillframe handling
Comment 17 Ronald Bultje 2005-07-01 16:53:23 UTC
applied.