Bug 704910 - sync/ Error creating thread/Segmentation fault
Status: RESOLVED INCOMPLETE
Product: GStreamer
Classification: Platform
Component: don't know
Version: 1.1.2
OS: Other Linux
Importance: Normal normal
Target Milestone: git master
Assigned To: GStreamer Maintainers
QA Contact: GStreamer Maintainers
Depends on:
Blocks:
Reported: 2013-07-26 03:02 UTC by troy
Modified: 2013-11-22 08:53 UTC
See Also:
GNOME target: ---
GNOME version: ---

Description troy 2013-07-26 03:02:08 UTC
hi all:
   I recently developed a project with Python and GStreamer: a recording server. I use udpsrc and rtpbin to receive the audio and video streams, push them to 'mp4mux', and finally push the result to 'filesink'. The pipeline looks roughly like this:
 udpsrc-->rtpbin--->rtph264depay--->h264parse-->mp4mux(video_0)--
                                                                 |-->***.mp4
 udpsrc-->rtpbin--->...--->faac---------------->mp4mux(audio_0)--

   I use SIP messages to control the pipeline; e.g. when I use 'Linphone' to call the server, the program sets the pipeline to the PLAYING state.
   There are two serious problems:

1. Audio/video are not in sync. The '**.mp4' plays in sync only in the 'vlc' player; in other players the audio is delayed by about 1 s. But when I replace 'mp4mux' with 'matroskamux' and record a '**.mkv', it plays in sync in every player. I then tested 'avimux', '3gpmux', and others; they behave the same as 'mp4mux', so I am confused...

2. I wrote a test program to load-test the server. When I simulated 40 'Linphone' clients calling the server, this error occurred:
(python:15406): GStreamer-WARNING **: failed to create thread: Error creating thread: Resource temporarily unavailable  
Segmentation fault (core dumped)


Can you help me?

My program is as follows:

#!/usr/bin/env python
# -=- encoding: utf-8 -=-
################ VIDEO RECEIVER

import gi
gi.require_version('Gst', '1.0')

from gi.repository import GObject, Gst
import time, socket, fcntl, struct
import os

#GObject.threads_init()
Gst.init(None)

record_dir = ''



class Recorder:
    def __init__(self, callid, has_audio, has_video):
        self.callid = callid
        self.has_video = has_video          #flag to judge if it has video stream
        self.has_audio = has_audio          #flag to judge if it has audio stream
        self.call_session = None
        self.session_callback = None

        # create Pipeline and rtpbin
        self.pipeline = Gst.Pipeline()
        self.rtpbin = Gst.ElementFactory.make('rtpbin', 'rtpbin')
        self.audio_dealing()
        if self.has_video == True :         
            self.video_dealing()

    def video_dealing(self):
        # create elements
        self.udpsrc_rtpin_video = Gst.ElementFactory.make('udpsrc', 'udpsrc0')
        self.udpsrc_rtcpin_video = Gst.ElementFactory.make('udpsrc', 'udpsrc1')
        self.udpsink_rtcpout_video = Gst.ElementFactory.make('udpsink', 'udpsink0')
        self.rtph264depay = Gst.ElementFactory.make('rtph264depay', 'rtpdepay')
        self.h264parse = Gst.ElementFactory.make('h264parse','h264parse')

        #add the elements into the pipeline
        self.pipeline.add(self.udpsrc_rtpin_video)
        self.pipeline.add(self.udpsrc_rtcpin_video)
        self.pipeline.add(self.rtph264depay)
        self.pipeline.add(self.h264parse)

        # Set properties
        self.udpsrc_rtpin_video.set_property('caps', Gst.caps_from_string('application/x-rtp,media=(string)video,clock-rate=(int)90000,encoding-name=(string)H264'))
        self.rtp_port_v = self.get_rtp_port_video()
        self.udpsrc_rtpin_video.set_property('port',self.rtp_port_v)
        self.udpsrc_rtcpin_video.set_property('port',self.rtp_port_v+1)

        # Link elements 
        self.udpsrc_rtpin_video.link_pads('src',self.rtpbin , 'recv_rtp_sink_0')
        self.udpsrc_rtcpin_video.link_pads('src', self.rtpbin, 'recv_rtcp_sink_0')
        self.rtph264depay.link(self.h264parse)

    def audio_dealing(self):
        # create elements
        self.udpsrc_rtpin_audio = Gst.ElementFactory.make('udpsrc', 'udpsrc2')
        self.udpsrc_rtcpin_audio = Gst.ElementFactory.make('udpsrc', 'udpsrc3')
        self.udpsink_rtcpout_audio = Gst.ElementFactory.make('udpsink', 'udpsink1')		
        self.pcmudepay = Gst.ElementFactory.make('rtppcmudepay','rtppcmudepay')
        self.mulawdec = Gst.ElementFactory.make('mulawdec','mulawdec')
        self.audioresample = Gst.ElementFactory.make('audioresample','audioresample')
        self.audioenc = Gst.ElementFactory.make('faac','faac')

        #add the elements into the pipeline
        self.pipeline.add(self.rtpbin)
        self.pipeline.add(self.udpsrc_rtpin_audio)
        self.pipeline.add(self.udpsrc_rtcpin_audio)
        self.pipeline.add(self.pcmudepay)
        self.pipeline.add(self.mulawdec)
        self.pipeline.add(self.audioresample)
        self.pipeline.add(self.audioenc)

        # Set properties
        self.rtpbin.set_property('latency', 400)
        self.udpsrc_rtpin_audio.set_property('caps', Gst.caps_from_string('application/x-rtp,media=(string)audio,clock-rate=(int)8000,encoding-name=(string)PCMU'))
        self.rtp_port_a = self.get_rtp_port_audio()
        self.udpsrc_rtpin_audio.set_property('port',self.rtp_port_a)
        self.udpsrc_rtcpin_audio.set_property('port',self.rtp_port_a+1)

        # Link elements 
        self.udpsrc_rtpin_audio.link_pads('src',self.rtpbin , 'recv_rtp_sink_1')
        self.udpsrc_rtcpin_audio.link_pads('src', self.rtpbin, 'recv_rtcp_sink_1')
        self.pcmudepay.link(self.mulawdec)
        self.mulawdec.link(self.audioresample)		
        self.audioresample.link(self.audioenc)

    def start_record(self):
        self.mp4mux_to_filesink()
        self.start_stream()
        print "Started mp4 recording..."

    def mp4mux_to_filesink(self):
        # Create elements
        if self.has_video == True:
            self.q_v = Gst.ElementFactory.make('queue',None)
        self.q_a = Gst.ElementFactory.make('queue',None)
        self.mp4mux = Gst.ElementFactory.make('mp4mux', 'mp4mux')
        self.filesink = Gst.ElementFactory.make('filesink', 'filesink')

        # Set properties
        global record_dir
        curr_time = time.strftime('%Y-%m-%d %H:%M:%S',time.localtime(time.time()))
        filename = record_dir + str(self.callid) + '-' + curr_time + '.mp4'
        self.filesink.set_property('location', filename)
        self.filesink.set_property('sync', False)   # boolean property, not the string 'false'
        if self.has_video == True:
            self.q_v.set_property('leaky',1)
        self.q_a.set_property('leaky',1)

        # Add elements into pipeline
        if self.has_video == True:
            self.pipeline.add(self.q_v)
        self.pipeline.add(self.q_a)
        self.pipeline.add(self.mp4mux)
        self.pipeline.add(self.filesink)

        #link element
        if self.has_video == True:
            self.h264parse.link(self.q_v)
            self.q_v.link_pads('src',self.mp4mux,'video_0')
        self.audioenc.link(self.q_a)
        self.q_a.link_pads('src',self.mp4mux,'audio_0')
        self.mp4mux.link(self.filesink)


    def start_stream(self):
        # Set callback
        self.rtpbin.connect('pad-added', self.rtpbin_pad_added)
        self.bus = self.pipeline.get_bus()
        self.bus.add_signal_watch()
        self.bus.connect('message::eos',self.on_eos)
        self.bus.connect('message::element',self.on_timeout)

        # Set state to PLAYING
        self.pipeline.set_state(Gst.State.PLAYING)
        # set_locked_state() takes a boolean; note these udpsink elements
        # were created but never added to the pipeline.
        self.udpsink_rtcpout_audio.set_locked_state(True)
        if self.has_video == True:
            self.udpsink_rtcpout_video.set_locked_state(True)

    def stop_stream(self):
        print 'stop'
        self.pipeline.send_event(Gst.Event.new_eos())

    def on_eos(self,bus,msg):
        print "on_eos"
        bus.remove_signal_watch()
        self.pipeline.set_state(Gst.State.NULL)
        print "now shut down the pipeline"

    def on_timeout(self,bus,msg):
        t = msg.get_structure().get_name()
        if t == 'GstUDPSrcTimeout':
            self.close_sip_session()
            print "on_timeout"
            self.pipeline.send_event(Gst.Event.new_eos())

    def set_event_callback(self, call, cb):#add by tjh , set callback to close sip session
        self.call_session = call
        self.session_callback = cb

    def close_sip_session(self):#add by tjh , to close sip session...
        self.session_callback(self.call_session, 'close')

    def rtpbin_pad_added(self,obj, pad):
        if self.has_video == False:
            a_pad = self.pcmudepay.get_static_pad("sink")
            pad.link(a_pad)
            print "audio stream is coming"
        else:
            pad_name = pad.get_name()
            if pad_name[0:14] == 'recv_rtp_src_0':
                v_pad = self.rtph264depay.get_static_pad("sink")
                pad.link(v_pad)
                print "Video stream is coming"
            elif pad_name[0:14] == 'recv_rtp_src_1':
                a_pad = self.pcmudepay.get_static_pad("sink")
                pad.link(a_pad)
                print "audio stream is coming"
        self.udpsrc_rtpin_audio.set_property('timeout', 5000000000)

    def get_rtp_port_video(self):
        self.udpsrc_rtpin_video.set_property('port', 0)
        self.udpsrc_rtpin_video.set_state(Gst.State.PAUSED)
        port_video = self.udpsrc_rtpin_video.get_property('port')
        self.udpsrc_rtpin_video.set_state(Gst.State.NULL)
        return port_video

    def get_rtp_port_audio(self):
        self.udpsrc_rtpin_audio.set_property('port', 0)
        self.udpsrc_rtpin_audio.set_state(Gst.State.PAUSED)
        port_audio = self.udpsrc_rtpin_audio.get_property('port')
        self.udpsrc_rtpin_audio.set_state(Gst.State.NULL)		
        return port_audio

    def get_local_address(self):
        rtp_port_a = self.rtp_port_a
        if self.has_video == True:
            rtp_port_v = self.rtp_port_v
            address = (get_ip_address('eth0'),rtp_port_a,rtp_port_a+1,rtp_port_v, rtp_port_v+1)
        else :
            address = (get_ip_address('eth0'),rtp_port_a,rtp_port_a+1,0,0)
        return address		

def get_ip_address(ifname):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    return socket.inet_ntoa(fcntl.ioctl(
        s.fileno(),
        0x8915,  # SIOCGIFADDR
        struct.pack('256s', ifname[:15])
        )[20:24])
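The "failed to create thread: Resource temporarily unavailable" warning in problem 2 is consistent with the process hitting a per-user task limit: each call's pipeline spawns several streaming threads (jitter buffer, udpsrc, queues), so 40 concurrent calls can add well over a hundred threads. A minimal, Linux-specific sketch for inspecting the limits that are usually involved (which limit actually bites here is an assumption worth verifying with `ulimit -a` as well):

```python
# Sketch: read the per-process limits that commonly produce
# "Error creating thread: Resource temporarily unavailable" on Linux.
import resource

# RLIMIT_NPROC caps tasks per user; on Linux, threads count as tasks.
nproc_soft, nproc_hard = resource.getrlimit(resource.RLIMIT_NPROC)

# Large default thread stacks also shrink the address space available
# for new threads on 32-bit systems (the reporter's backtrace is i386).
stack_soft, stack_hard = resource.getrlimit(resource.RLIMIT_STACK)

limits = {"nproc": (nproc_soft, nproc_hard),
          "stack": (stack_soft, stack_hard)}
```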
Comment 1 Sebastian Dröge (slomo) 2013-07-26 07:00:23 UTC
(In reply to comment #0)
> hi all:
>    I recently development a project by python gstreamer.It is an recording
> server. I use udpsrc and rtpbin to recieve audio stream and video stream ,and
> then push them to 'mp4mux' ,finaly push the stream to 'filesink'. .The pipeline
> is probably like this :
>  udpsrc-->rtpbin--->rtph264depay--->h264parse-->mp4mux(video_0)--
>                                                                  |-->***.mp4
>  udpsrc-->rtpbin--->...--->faac---------------->mp4mux(audio_0)--
> 
>    And I use 'sip' message to control the pipelne,e.g. when I use 'Linphone' to
> call the server ,the program will set the pipeline to PLAY state. 
>    There are two serious problems:
> 
> 1.audio/video not sync. The '**.mp4' is sync only in 'vlc' player,others player
> audio will delay about 1s.But when I replace the 'mp4mux' to 'matroskamux',and
> get the '**.mkv',it is sync with every players.Then I test
> 'avimux','3gpmux'...they are the same with 'mp4mux',then I am confused...

Can you attach such a recording as MP4 (with the A/V sync issue), and as MKV (with A/V correct)? You probably want to put queues right before both muxer sinkpads btw.

3gpmux/qtmux/mp4mux are all the same code, just outputting a different variant btw.
 
> 2.I write an test program to test the server.When I simulated 40  'Linphone' to
> call the server ,an error accured :
> (python:15406): GStreamer-WARNING **: failed to create thread: Error creating
> thread: Resource temporarily unavailable  
> Segmentation fault (core dumped)

Can you use gdb to get a) a backtrace of the segfault and b) a backtrace of the warning? (For the latter, set G_DEBUG=fatal_warnings.)
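The G_DEBUG suggestion can be sketched as a shell session; the script name recorder.py is an assumption taken from the traceback later in this report:

```shell
# Make GLib warnings abort the process, so gdb stops exactly where
# "failed to create thread" is emitted.
export G_DEBUG=fatal_warnings
# Then run under gdb (commented out here; needs gdb and the script):
#   gdb --args python recorder.py
#   (gdb) run
#   (gdb) bt        # backtrace at the now-fatal warning
echo "$G_DEBUG"
```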
Comment 2 troy 2013-07-26 07:41:27 UTC
(In reply to comment #1)
> (In reply to comment #0)
> > hi all:
> >    I recently development a project by python gstreamer.It is an recording
> > server. I use udpsrc and rtpbin to recieve audio stream and video stream ,and
> > then push them to 'mp4mux' ,finaly push the stream to 'filesink'. .The pipeline
> > is probably like this :
> >  udpsrc-->rtpbin--->rtph264depay--->h264parse-->mp4mux(video_0)--
> >                                                                  |-->***.mp4
> >  udpsrc-->rtpbin--->...--->faac---------------->mp4mux(audio_0)--
> > 
> >    And I use 'sip' message to control the pipelne,e.g. when I use 'Linphone' to
> > call the server ,the program will set the pipeline to PLAY state. 
> >    There are two serious problems:
> > 
> > 1.audio/video not sync. The '**.mp4' is sync only in 'vlc' player,others player
> > audio will delay about 1s.But when I replace the 'mp4mux' to 'matroskamux',and
> > get the '**.mkv',it is sync with every players.Then I test
> > 'avimux','3gpmux'...they are the same with 'mp4mux',then I am confused...
> 
> Can you attach such a recording as MP4 (with the A/V sync issue), and as MKV
> (with A/V correct)? You probably want to put queues right before both muxer
> sinkpads btw.
> 
> 3gpmux/qtmux/mp4mux are all the same code, just outputting a different variant
> btw.
> 
> > 2.I write an test program to test the server.When I simulated 40  'Linphone' to
> > call the server ,an error accured :
> > (python:15406): GStreamer-WARNING **: failed to create thread: Error creating
> > thread: Resource temporarily unavailable  
> > Segmentation fault (core dumped)
> 
> Can you get with gdb a) a backtrace of the segfault and b) of the warning (for
> that set G_DEBUG=fatal_warnings)

Thank you, but there are already queues before mp4mux. And the file is too big to attach.
Comment 3 Sebastian Dröge (slomo) 2013-07-26 07:54:07 UTC
You could upload it to one of the many file upload services, or just don't record for that long.
Comment 4 troy 2013-07-26 08:11:52 UTC
(In reply to comment #1)
> (In reply to comment #0)
> > hi all:
> >    I recently development a project by python gstreamer.It is an recording
> > server. I use udpsrc and rtpbin to recieve audio stream and video stream ,and
> > then push them to 'mp4mux' ,finaly push the stream to 'filesink'. .The pipeline
> > is probably like this :
> >  udpsrc-->rtpbin--->rtph264depay--->h264parse-->mp4mux(video_0)--
> >                                                                  |-->***.mp4
> >  udpsrc-->rtpbin--->...--->faac---------------->mp4mux(audio_0)--
> > 
> >    And I use 'sip' message to control the pipelne,e.g. when I use 'Linphone' to
> > call the server ,the program will set the pipeline to PLAY state. 
> >    There are two serious problems:
> > 
> > 1.audio/video not sync. The '**.mp4' is sync only in 'vlc' player,others player
> > audio will delay about 1s.But when I replace the 'mp4mux' to 'matroskamux',and
> > get the '**.mkv',it is sync with every players.Then I test
> > 'avimux','3gpmux'...they are the same with 'mp4mux',then I am confused...
> 
> Can you attach such a recording as MP4 (with the A/V sync issue), and as MKV
> (with A/V correct)? You probably want to put queues right before both muxer
> sinkpads btw.
> 
> 3gpmux/qtmux/mp4mux are all the same code, just outputting a different variant
> btw.
> 
> > 2.I write an test program to test the server.When I simulated 40  'Linphone' to
> > call the server ,an error accured :
> > (python:15406): GStreamer-WARNING **: failed to create thread: Error creating
> > thread: Resource temporarily unavailable  
> > Segmentation fault (core dumped)
> 
> Can you get with gdb a) a backtrace of the segfault and b) of the warning (for
> that set G_DEBUG=fatal_warnings)

Thank you. The gdb information is as follows; what is the problem?

Program received signal SIGSEGV, Segmentation fault.

Thread 3148143424 (LWP 4001)

#0  __memset_sse2 at ../sysdeps/i386/i686/multiarch/memset-sse2.S:364
#1  faacEncOpen from /usr/lib/libfaac.so.0
#2  gst_faac_open_encoder at gstfaac.c:479
#3  gst_faac_configure_source_pad at gstfaac.c:555
#4  gst_faac_set_format at gstfaac.c:395
#5  gst_audio_encoder_sink_setcaps at gstaudioencoder.c:1282
#6  gst_audio_encoder_chain at gstaudioencoder.c:1055
#7  gst_pad_chain_data_unchecked at gstpad.c:3718
#8  gst_pad_push_data at gstpad.c:3948
#9  gst_base_transform_chain at gstbasetransform.c:2215
#10 gst_pad_chain_data_unchecked at gstpad.c:3718
#11 gst_pad_push_data at gstpad.c:3948
#12 gst_audio_decoder_push_forward at gstaudiodecoder.c:873
#13 gst_audio_decoder_output at gstaudiodecoder.c:949
#14 gst_audio_decoder_finish_frame at gstaudiodecoder.c:1152
#15 gst_mulawdec_handle_frame at mulaw-decode.c:129
#16 gst_audio_decoder_handle_frame at gstaudiodecoder.c:1199
#17 gst_audio_decoder_push_buffers at gstaudiodecoder.c:1297
#18 gst_audio_decoder_chain_forward at gstaudiodecoder.c:1400
#19 gst_audio_decoder_chain at gstaudiodecoder.c:1668
#20 gst_pad_chain_data_unchecked at gstpad.c:3718
#21 gst_pad_push_data at gstpad.c:3948
#22 gst_rtp_base_depayload_push at gstrtpbasedepayload.c:607
#23 gst_rtp_base_depayload_chain at gstrtpbasedepayload.c:355
#24 gst_pad_chain_data_unchecked at gstpad.c:3718
#25 gst_pad_push_data at gstpad.c:3948
#26 gst_proxy_pad_chain_default at gstghostpad.c:128
#27 gst_pad_chain_data_unchecked at gstpad.c:3718
#28 gst_pad_push_data at gstpad.c:3948
#29 gst_rtp_pt_demux_chain at gstrtpptdemux.c:446
#30 gst_pad_chain_data_unchecked at gstpad.c:3718
#31 gst_pad_push_data at gstpad.c:3948
#32 gst_rtp_jitter_buffer_loop at gstrtpjitterbuffer.c:1949
#33 gst_task_func at gsttask.c:316
#34 default_func at gsttaskpool.c:70
#35 ?? from /lib/i386-linux-gnu/libglib-2.0.so.0
#36 ?? from /lib/i386-linux-gnu/libglib-2.0.so.0
#37 start_thread at pthread_create.c:308
#38 clone at ../sysdeps/unix/sysv/linux/i386/clone.S:130


Comment 5 Sebastian Dröge (slomo) 2013-07-26 08:29:13 UTC
Realistically it can only crash there because of memory corruption that happened somewhere else, earlier. Can you a) get a backtrace of that warning, and b) run everything under valgrind to see what is happening?
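The requested valgrind run can be sketched like this (again assuming the script is recorder.py; flags other than --track-origins=yes are optional extras):

```shell
# Assemble the valgrind invocation; --track-origins=yes reports where
# uninitialised or corrupted memory originally came from.
VG_CMD="valgrind --track-origins=yes --num-callers=30 python recorder.py"
# Run it with stderr captured (commented out: needs valgrind installed):
#   $VG_CMD 2> valgrind.log
echo "$VG_CMD"
```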
Comment 6 Olivier Crête 2013-07-26 09:07:12 UTC
Also, do you get any warnings or criticals while running it?
Comment 7 troy 2013-07-26 10:36:47 UTC
(In reply to comment #6)
> Also, do you get any warnings or criticals while running it ?

Thank you. Sometimes I get a warning like this:

Traceback (most recent call last):
  File "/root/Record-Server/recorder.py", line 424, in on_eos
    self.pipeline.set_state(Gst.State.NULL)
AttributeError: Recorder instance has no attribute 'pipeline'

(python:9201): GStreamer-CRITICAL **: 
Trying to dispose element pipeline10, but it is in PLAYING instead of the NULL state.
You need to explicitly set elements to the NULL state before
dropping the final reference, to allow them to clean up.
This problem may also be caused by a refcounting bug in the
application or some element
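Both messages point at teardown ordering in the reporter's script: on_eos can fire after the Recorder has already been torn down, and the pipeline must reach NULL before its last reference is dropped. A hedged, Gst-free sketch of such a guard (the attribute handling is the point; real code would call pipeline.set_state(Gst.State.NULL) where indicated):

```python
# Sketch of a teardown guard: tolerate a late bus callback, and reach
# NULL before dropping the last pipeline reference. A plain object
# stands in for Gst.Pipeline so the pattern is testable here.
class Recorder(object):
    def __init__(self):
        self.pipeline = object()   # stands in for Gst.Pipeline()

    def on_eos(self, bus=None, msg=None):
        pipeline = getattr(self, "pipeline", None)
        if pipeline is None:       # callback arrived after teardown
            return "already-shut-down"
        # Real code: pipeline.set_state(Gst.State.NULL) goes here,
        # *before* the reference is released.
        self.pipeline = None       # drop our reference only afterwards
        return "shut-down"
```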
Comment 8 Sebastian Dröge (slomo) 2013-07-26 11:00:01 UTC
Well, both of those are obviously bugs in your code.


However, could you get a log with valgrind, ideally with --track-origins=yes?
Comment 9 Tobias Mueller 2013-11-22 08:53:10 UTC
Closing this bug report as no further information has been provided. Please feel free to reopen this bug if you can provide the information asked for.
Thanks!