Bug 732896 - iOS: AudioUnit on iOS produces echo with in/out streams
Status: RESOLVED OBSOLETE
Product: GStreamer
Classification: Platform
Component: gst-plugins-good
Version: 1.x
Hardware: Other
OS: Mac OS
Importance: Normal normal
Target Milestone: git master
Assigned To: Ilya Konstantinov
QA Contact: GStreamer Maintainers
Depends on:
Blocks:
 
 
Reported: 2014-07-08 12:26 UTC by Elio Francesconi
Modified: 2018-11-03 14:53 UTC
See Also:
GNOME target: ---
GNOME version: ---



Description Elio Francesconi 2014-07-08 12:26:57 UTC
The osxaudiosrc plugin uses the Remote I/O unit, which is not suited for VoIP because it provides no echo cancellation.
According to the Apple documentation, the Voice-Processing I/O unit extends the Remote I/O unit by adding acoustic echo cancellation for use in a VoIP or voice-chat application:
https://developer.apple.com/library/ios/documentation/MusicAudio/Conceptual/AudioUnitHostingGuide_iOS/AudioUnitHostingFundamentals/AudioUnitHostingFundamentals.html

To use the Voice-Processing I/O unit I made these changes in gstosxcoreaudioremoteio.c:

static gboolean
gst_core_audio_open_impl (GstCoreAudio * core_audio)
{
  return gst_core_audio_open_device (core_audio, kAudioUnitSubType_VoiceProcessingIO,
                                       "VoiceProcessingIO");
  //return gst_core_audio_open_device (core_audio, kAudioUnitSubType_RemoteIO,
  //    "RemoteIO");
}

This patch alone was not enough to solve the issue: after a few calls to AudioUnitRender in gstosxaudiosrc.c I started getting status = -50. It seems AudioUnitRender modifies buf->core_audio->recBufferList->mBuffers[0].mDataByteSize based on the amount of data returned, so mDataByteSize has to be set again before every AudioUnitRender call. Today I built my own osxaudio plugin with these changes, hardcoding "bytes per frame" and "channels", and that solved my problem. I did not check the size of the allocated buffer because it is big enough.

buf->core_audio->recBufferList->mNumberBuffers = 1;
buf->core_audio->recBufferList->mBuffers[0].mDataByteSize =
    inNumberFrames * 4;   /* where 4 is bytes per frame */
buf->core_audio->recBufferList->mBuffers[0].mNumberChannels = 1;   /* num of channels */

status = AudioUnitRender (buf->core_audio->audiounit, ioActionFlags,
    inTimeStamp, inBusNumber, inNumberFrames, buf->core_audio->recBufferList);

if (status) {
  GST_WARNING_OBJECT (buf, "AudioUnitRender returned %d", (int) status);
  return status;
}
Comment 1 Robert Swain 2014-09-17 08:23:18 UTC
Can you upload a patch for this?
Comment 2 Ilya Konstantinov 2015-02-20 02:42:32 UTC
Rob, I'd like to fix this. I've been considering a change to how the osxaudio code is laid out.

Now we have gstosxcoreaudio.c including *either*
1) gstcoreaudiohal.c (osx only)
2) gstcoreaudioremoteio.c (ios only)

Yes, literally #include-ing.

I've been considering either:

1) Build 3 implementations (HAL, Remote IO, VP IO) as separate compilation units, then call impl->start_processing(...) etc., where impl is a pointer to a struct of func ptrs (rough sketch at the end of this comment).

2) If Remote IO and HAL aren't that different, maybe we should unify the code and #ifdef out the parts that don't build on iOS (since iOS is the subset).

What do you think? I don't want to rush into it without giving it a bit of thought from someone more experienced.
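
A minimal sketch of what option 1 could look like (GstCoreAudioImpl and the open_device/start_processing/stop_processing fields are assumptions for illustration, not existing osxaudio API):

#include <glib.h>

typedef struct _GstCoreAudio GstCoreAudio;   /* real definition lives in gstosxcoreaudio.h */

/* Hypothetical vtable filled in by each backend. */
typedef struct {
  gboolean (*open_device)      (GstCoreAudio * core_audio);
  gboolean (*start_processing) (GstCoreAudio * core_audio);
  gboolean (*stop_processing)  (GstCoreAudio * core_audio);
} GstCoreAudioImpl;

/* Each backend (HAL, Remote IO, VoiceProcessing IO) would be its own
 * compilation unit exporting one vtable. */
extern const GstCoreAudioImpl gst_core_audio_hal_impl;
extern const GstCoreAudioImpl gst_core_audio_remoteio_impl;
extern const GstCoreAudioImpl gst_core_audio_vpio_impl;

static gboolean
gst_core_audio_start (GstCoreAudio * core_audio, const GstCoreAudioImpl * impl)
{
  /* Dispatch through the vtable instead of #include-ing one backend. */
  if (!impl->open_device (core_audio))
    return FALSE;
  return impl->start_processing (core_audio);
}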
Comment 3 Ilya Konstantinov 2015-02-21 11:06:26 UTC
Oops, I was confused.
HAL is only available on OS X.
Remote IO and VoiceProcessing IO are only available on iOS.

My little ambitious project is irrelevant. :) I'll go on to implement a parameter to trigger VPIO on iOS.
Comment 4 Arun Raghavan 2015-02-21 11:59:45 UTC
Short term, having a selector to pick VoiceProcessingIO would be good (rough sketch below). Long term, I'd like to get rid of the #include'ing as well, but this is more a cleanup than a feature.

What I looked up suggests that VoiceProcessingIO works on both OS X and iOS, so it would be good to keep that in mind while implementing this.
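
As a rough sketch of such a selector, assuming a hypothetical is_vpio flag on GstCoreAudio that an element property (e.g. "voice-processing") would set, the iOS codepath in gstosxcoreaudioremoteio.c could pick the subtype at open time:

static gboolean
gst_core_audio_open_impl (GstCoreAudio * core_audio)
{
  /* is_vpio is an assumed field, not existing code; gst_core_audio_open_device
   * and the AudioUnit subtype constants already exist. */
  if (core_audio->is_vpio)
    return gst_core_audio_open_device (core_audio,
        kAudioUnitSubType_VoiceProcessingIO, "VoiceProcessingIO");

  return gst_core_audio_open_device (core_audio,
      kAudioUnitSubType_RemoteIO, "RemoteIO");
}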
Comment 5 Ilya Konstantinov 2015-02-21 12:14:39 UTC
You seem to be right, VoiceProcessing IO is available on OS X:
https://developer.apple.com/library/mac/documentation/AudioUnit/Reference/AUComponentServicesReference/

Funny enough, Apple's OS X documentation contains a copy-paste mistake:
"An audio unit that interfaces to the audio inputs and outputs of iPhone OS devices ..."

I'll keep that in mind.
Comment 6 Sebastian Dröge (slomo) 2015-02-24 08:57:45 UTC
VoiceProcessing is 10.7 or later, so it should be OK to compile it in unconditionally.
Comment 7 Ilya Konstantinov 2015-02-24 10:46:28 UTC
(In reply to Sebastian Dröge (slomo) from comment #6)
> VoiceProcessing is 10.7 or later, so should be ok to compile in
> unconditionally.

The trouble is making gstosxcoreaudioremoteio.c build on OS X at all. Right now it's really "the iOS codepath".
Comment 8 Arun Raghavan 2015-02-24 11:08:36 UTC
Does it not work if in both cases we choose the VoiceProcessingIO subtype at device open time?
Comment 9 Matthew Waters (ystreet00) 2018-05-07 10:42:25 UTC
Are you still planning to work on this, Ilya?
Comment 10 GStreamer system administrator 2018-11-03 14:53:28 UTC
-- GitLab Migration Automatic Message --

This bug has been migrated to freedesktop.org's GitLab instance and has been closed from further activity.

You can subscribe and participate further through the new bug through this link to our GitLab instance: https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/issues/123.