GNOME Bugzilla – Bug 415360
enable creating stop-motion animations/slideshows using multi-image sources
Last modified: 2015-10-20 12:55:51 UTC
Read on the forum: Hi, [quote=LichiMan]Hi, I would like to suggest a feature request. It would be nice if you could import an image sequence instead of importing images separately like it does now. I suppose you have this on your todo list, but if not, it would be nice so anyone could make videos from their renders from 3D applications or whatever. Thanks for making this application real. Miguel.[/quote] After thinking a bit more about this, it should be possible to create a multi-image python element in pitivi to handle this. Such an element could be inserted in a pipeline (and therefore used in the timeline), would list the files to be used, what framerate to output, and so on.
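A minimal sketch of what such a source could look like, assuming gst-python 0.10 (matching the gst.SECOND usage later in this report) and building on GStreamer's stock multifilesrc element rather than a new Python element; the filename pattern and framerate are placeholders:

import gst

# Expose a numbered image sequence (img.0000.png, img.0001.png, ...) as a
# single decoded video source that can be linked into a larger pipeline.
image_sequence_src = gst.parse_bin_from_description(
    'multifilesrc location="img.%04d.png" index=0 '
    'caps="image/png,framerate=(fraction)12/1" ! '
    'pngdec ! ffmpegcolorspace',
    True)  # ghost-pad the unlinked pads so the bin can be used like an element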
*** Bug 588246 has been marked as a duplicate of this bug. ***
Created attachment 153887 [details] [review] Set a default framerate for clips created from image files
Created attachment 153888 [details] [review] Fix a bug in mainwindow.py when calculating zoom ratio. 02-pitivi-duration.patch fixes:
+ Trace 220589
ideal_zoom_ratio = ruler_width / float(timeline_duration / gst.SECOND)
When timeline_duration is smaller than gst.SECOND, the inner division is done with integers and yields 0, so the denominator becomes 0 and we divide by zero. The patch simply casts timeline_duration to a float so we get float division from the get-go. I think this affects normal users too, because integer division makes the timeline zooming very 'chunky'.
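For illustration, a sketch of the fix described above (the exact patch hunk is not quoted here, so this only shows the intent):

# Before: timeline_duration / gst.SECOND is integer division and truncates
# to 0 for durations under one second, so the outer float() comes too late.
ideal_zoom_ratio = ruler_width / float(timeline_duration / gst.SECOND)
# After: cast first, so the whole expression uses float division.
ideal_zoom_ratio = ruler_width / (float(timeline_duration) / gst.SECOND)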
Created attachment 153889 [details] [review] Fix a bug in zoominterface.py when very small clips are processed. 03-pitivi-duration.patch fixes:
+ Trace 220590
return int((((ratio - cls.min_zoom) / cls.zoom_range) ** (1.0/3.0)) *
(Found after 02-pitivi-duration.patch is applied) When the ratio calculated above is smaller than cls.min_zoom, the numerator becomes negative. I fixed this by returning cls.max_zoom if ratio < cls.min_zoom. With a very short clip, zooming all the way in is the right thing to do.
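A sketch of the guard described above, placed before the existing return statement (class attribute names taken from the snippet; the rest of that return expression is truncated in this report):

if ratio < cls.min_zoom:
    # Avoid raising a negative number to a fractional power: for very
    # short clips, just zoom in all the way.
    return cls.max_zoom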
I've attached some patches that allow for the stop-motion animation use-case in a different way. These patches just let the user change the default length of a clip that is created from an image file. The default before the patch is a hard-coded 5 seconds. The patches allow the user to set a 'framerate' for clips created from image files, from which the actual length of the clips is calculated. The changes in 01-pitivi-duration-redo.patch create a new configuration variable to hold the default duration of clips imported from an image file, with a minimum of 15 ms (about 60 frames/second). Sixty frames/second is the upper bound for stop motion animation according to Wikipedia. I picked a default of 250 ms (4 frames/second) because it seemed like a good default for an amateur stop-motion user without being too small for other users - a quarter-second frame should be visible enough for the user to figure out what is going on and lengthen the clip if needed.
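To make the numbers above concrete, a small sketch of the duration/framerate relation, assuming gst-python as used elsewhere in these patches; the constant and function names are illustrative, not taken from the patch:

import gst

MIN_IMAGE_CLIP_DURATION = 15 * gst.SECOND / 1000   # 15 ms, about 60 frames/second
DEFAULT_IMAGE_CLIP_DURATION = gst.SECOND / 4       # 250 ms, i.e. 4 frames/second

def image_clip_duration(framerate):
    # Duration (in nanoseconds) of a clip created from an image file,
    # clamped to the 15 ms minimum described above.
    return max(gst.SECOND / framerate, MIN_IMAGE_CLIP_DURATION)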
Created attachment 164262 [details] [review] Replacement for 01-pitivi-duration.patch. Someone requested an updated version of these patches, since the old ones no longer apply cleanly.
Created attachment 164263 [details] [review] Fix a bug in mainwindow.py when calculating zoom ratio
Comment on attachment 153889 [details] [review] Fix a bug in zoominterface.py when very small clips are processed. This patch doesn't seem to be needed anymore.
Review of attachment 164263 [details] [review]: The contents of this method have changed in pitivi git recently, could you check if this patch is still required?
I've been looking a little bit at how openshot does this in classes/files.py, and it is far from perfect. We can do much better and simpler in terms of UI, workflow and code implementation.

I suspect the "proper" way of doing this could be using gstreamer's multifilesrc: http://gstreamer.freedesktop.org/data/doc/gstreamer/head/gst-plugins-good-plugins/html/gst-plugins-good-plugins-multifilesrc.html

It provides a gst-launch example pipeline:

    gst-launch multifilesrc location="img.%04d.png" index=0 \
        caps="image/png,framerate=\(fraction\)12/1" ! \
        pngdec ! ffmpegcolorspace ! theoraenc ! oggmux ! \
        filesink location="images.ogg"

...which is pretty much what I'd like to do. When the user imports a sequence of image files, we should simply pre-convert them into a nice theora (or VP8) clip and import the resulting clip. I am guessing that this approach might provide better performance and usability. Otherwise, importing hundreds of image files into pitivi would:
- destroy startup times
- make us hit the kernel's "max open files" limit
- possibly destroy the timeline canvas performance
- require a lot of UI/design to manage those things
- probably have crappy playback performance
- certainly cause a ton of weird bugs with playback/rendering

The other approach is to have some sort of internal representation of a "metaclip" in pitivi, where a "bunch of image clips" are grouped as a single "image sequence clip", but I believe this would not only be crappy in terms of performance, it would also be a lot of unnecessary trouble to implement.
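For reference, a minimal sketch of driving that same pre-conversion pipeline from Python (gst-python 0.10 assumed; the function name and arguments are placeholders):

import gst

def preconvert_image_sequence(pattern, fps, out_path):
    # Build the same pipeline as the gst-launch example above.
    pipeline = gst.parse_launch(
        'multifilesrc location="%s" index=0 '
        'caps="image/png,framerate=(fraction)%d/1" ! '
        'pngdec ! ffmpegcolorspace ! theoraenc ! oggmux ! '
        'filesink location="%s"' % (pattern, fps, out_path))
    pipeline.set_state(gst.STATE_PLAYING)
    # Block until the conversion finishes or fails.
    pipeline.get_bus().poll(gst.MESSAGE_EOS | gst.MESSAGE_ERROR, -1)
    pipeline.set_state(gst.STATE_NULL)

Calling preconvert_image_sequence("img.%04d.png", 12, "images.ogg") would mirror the command line above.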
Actually, my idea of preconverting files in comment 10 is probably wrong. In the case where we're dealing with network storage, or with thousands of high-resolution images, it might be more expensive to preconvert than to access individual frames directly. Maybe GES could handle opening/closing files on the fly. One of the interesting prospects of not preconverting might be that you can seek by "jumping" to a particular frame with great precision. This could lead to better performance than dealing with a codec for seeking. I'm not sure about playback performance and system resources management though.
https://github.com/cfoch/pitivi (branch: stopmotion). It doesn't group image clips for now.
Continuing on the train of thought of comment #10 and comment #11, some more thoughts on how this could behave (from an IRC discussion, edited for clarity and brevity):

César Fabián's old attempt in comment #12 was basically a script to take the images (sorted by some argument) and add them to the timeline as individual clips. While it *could* have worked (kind of) in theory, in practice it would be unmanageable for users.

AFAIK, the "nice" and proper way would be to use GES' feature to deal with multi-image file sources: some work has been done in bug #719373 to make that functionality available in GES. So an image sequence would be a "special" type of clip: in the timeline it would behave like one normal, solid, single "video" clip, while in reality, in the backend, it's a sequence of images. It's simpler for users, less trouble for the timeline canvas, and you can then split/trim/add effects onto the whole sequence.

In the media library, it would appear as a single clip too, but you could have special controls in this clip's properties dialog (the one you can access in the media library's toolbar) to allow adding/removing/reordering images (that's the 2nd, "harder" phase of the project).
This bug has been migrated to https://phabricator.freedesktop.org/T1832. Please use the Phabricator interface to report further bugs by creating a task and associating it with Project: Pitivi. See http://wiki.pitivi.org/wiki/Bug_reporting for details.