
Add support for iOS #881

Open · wants to merge 6 commits into master

Conversation

@fwcd commented Jan 29, 2024

Fixes #156, fixes #749

This is a rebased version of Hans Petter Selasky's patch (originally taken from here), adding support for Core Audio on iOS. I have had success using this in an experimental iOS port of the DJ app Mixxx (see here).

Building for iOS

Building for iOS is as simple as setting the CMAKE_SYSTEM_NAME (on a Mac):

cmake -B build -DCMAKE_SYSTEM_NAME=iOS
cmake --build build
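
For reference, a slightly fuller invocation might look like the following sketch; the sysroot and architecture values are illustrative assumptions (CMake picks a default iOS SDK when only CMAKE_SYSTEM_NAME is given), not something this patch requires:

# Device build (arm64 against the iphoneos SDK):
cmake -B build -DCMAKE_SYSTEM_NAME=iOS -DCMAKE_OSX_SYSROOT=iphoneos -DCMAKE_OSX_ARCHITECTURES=arm64
cmake --build build

# Simulator build:
cmake -B build-sim -DCMAKE_SYSTEM_NAME=iOS -DCMAKE_OSX_SYSROOT=iphonesimulator
cmake --build build-sim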

Notes on Usage

As suggested by Alexander Carôt, setting up the shared AVAudioSession is generally recommended for every iOS project using this, for example like this (error handling omitted for brevity):

#import <AVFoundation/AVFoundation.h>

AVAudioSession* session = AVAudioSession.sharedInstance;
AVAudioSessionCategory category = AVAudioSessionCategoryPlayback;
AVAudioSessionMode mode = AVAudioSessionModeDefault;
AVAudioSessionCategoryOptions options = AVAudioSessionCategoryOptionMixWithOthers;

NSError* error = nil;
[session setCategory:category mode:mode options:options error:&error];
[session setActive:YES error:&error];

iOS does set up a default audio session; the defaults are, however, rarely the desired configuration for a media app. For example, by default sound only plays when the phone is not in silent mode. Setting up the AVAudioSession as described above solved that.

While it would probably be possible to include this in PortAudio's initialization procedure, it is (IMO) better handled by the library consumer, since it involves decisions such as whether the app should mix with or mute other audio, something different apps will likely want to handle differently.

Thoughts welcome.

hselasky and others added 2 commits January 29, 2024 01:44
@fwcd (Author) commented Feb 9, 2024

Did my best to fix all of the formatting issues (manually; the patch contained a strange mix of 3-space, 4-space, and tab indentation, and the VSCode formatter would have preferred a different style from what the rest of the project uses).

The pa_whitelint.py check now passes.

@RossBencina (Collaborator) commented:

iOS does set up a default audio session

In fact, the audio session is a singleton, so it's not really a question of whether to set one up, but whether we're happy with the defaults. The docs say "Although the default audio session provides useful behavior, it generally doesn’t provide the audio behavior a media app needs. To change the default behavior, you configure your app’s audio session category." [1]

I notice that you mention the playback category, but if full-duplex is required that will not work.[2]

Let's stew on this.

I'd like to get some other people to confirm this PR works for them before merging.

[1] https://developer.apple.com/documentation/avfaudio/avaudiosession
[2] https://developer.apple.com/documentation/avfaudio/avaudiosession/category
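
(For illustration only, not part of the discussion above: a minimal Objective-C sketch of the full-duplex case, using AVAudioSessionCategoryPlayAndRecord, the category Apple documents for simultaneous input and output; error handling omitted as in the PR description:)

NSError* error = nil;
// PlayAndRecord enables input and output at the same time; the mix-with-others
// option mirrors the snippet in the PR description.
[AVAudioSession.sharedInstance setCategory:AVAudioSessionCategoryPlayAndRecord
                                      mode:AVAudioSessionModeDefault
                                   options:AVAudioSessionCategoryOptionMixWithOthers
                                     error:&error];
[AVAudioSession.sharedInstance setActive:YES error:&error];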

@philburk (Collaborator) commented Feb 9, 2024

@fwcd - thanks for doing this! It will be great to have iOS support.

This is a rebased version of Hans Petter Selasky's patch (originally taken from here),

Did your first commit have changes beyond just those needed for rebasing?

(Resolved review thread on CMakeLists.txt)
@fwcd (Author) commented Feb 9, 2024

In fact, the audio session is a singleton, so it's not really a question of whether to set one up, but whether we're happy with the defaults.

Yeah, I might have phrased it badly; I meant setting up the singleton (as in the code snippet). The playback category is just an example there. The defaults behaving weirdly, in the sense that speaker output wouldn't work, was something that I didn't (and still don't) really understand, so I would appreciate it if someone could comment on that.

Another, more minor, issue I see with configuring the audio session from PortAudio is that it will introduce a new dependency on a much higher-level framework (AVFAudio) compared to Core Audio, which PortAudio currently uses. I am not familiar enough with iOS' audio stack to comment on whether there is a way to express this solely in terms of Core Audio's API, but that is probably something we would not want to do anyway, given that using the AVAudioSession API is generally the recommended approach for iOS apps.

Did your first commit have changes beyond just those needed for rebasing?

No, the first commit is 1-to-1 the HPS patch.

@RossBencina (Collaborator) commented:

The defaults behaving weirdly, in the sense that speaker output wouldn't work, was something that I didn't (and still don't) really understand.

The docs say that the default won't output audio if the phone is in silent mode, could that be the issue?

@fwcd (Author) commented Feb 9, 2024

Ah, looks like I missed that part, that would definitely explain it :D

But yeah, I guess that just underlines that it's rarely the right behavior for media apps.

@RossBencina (Collaborator) commented:

that would definitely explain it :D

Could you revise and simplify the PR description to reflect the updated state of your understanding please?

@fwcd (Author) commented Feb 10, 2024

Hm, the CI error is strange; I can't reproduce it locally, neither with Makefiles, nor with Ninja, nor with vcpkg:

 CMake Error at /usr/local/Cellar/cmake/3.28.0/share/cmake/Modules/FindPackageHandleStandardArgs.cmake:230 (message):
  Could NOT find Threads (missing: Threads_FOUND)

I found this upstream issue, however: https://gitlab.kitware.com/cmake/cmake/-/issues/18993
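
(For what it's worth, and not verified against this particular failure: when configure-time compile checks fail while cross-compiling for iOS, one commonly used measure is to have CMake's try_compile build a static library instead of a signed executable, via the standard CMAKE_TRY_COMPILE_TARGET_TYPE variable. Whether that is related to the Threads detection failure here is an open question:)

cmake -B build -DCMAKE_SYSTEM_NAME=iOS -DCMAKE_TRY_COMPILE_TARGET_TYPE=STATIC_LIBRARY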

*hostApi = &auhalHostApi->inheritedHostApiRep;

(*hostApi)->info.structVersion = 1;
(*hostApi)->info.type = paCoreAudio;

@fwcd (Author) commented on the lines above:

Should iOS be using paInDevelopment and later get its own type ID, or is it okay to use the same ID as the macOS Core Audio implementation, given that the two should never be available at the same time?
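
(For illustration, a fragment sketching the two options being weighed, building on the quoted lines above; paInDevelopment and paCoreAudio are both existing PaHostApiTypeId values, and TARGET_OS_IPHONE comes from Apple's TargetConditionals.h:)

#include <TargetConditionals.h>

#if TARGET_OS_IPHONE
    (*hostApi)->info.type = paInDevelopment; /* placeholder until iOS gets a dedicated id */
#else
    (*hostApi)->info.type = paCoreAudio;     /* reuse the macOS Core Audio id */
#endif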

@RossBencina (Collaborator) commented:

Failing iOS clang CI. Please could you rebase and check? It could possibly be fixed with the fix we did on master: 7de41c1

@jona-1993 commented Jul 29, 2024

Hello,

I used your solution for .NET 8 iOS (with dllimport).

But I think you should add set(CMAKE_OSX_DEPLOYMENT_TARGET "14.5") to CMakeLists.txt, because without it the build requires the latest version of iOS.
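
(As a sketch of an alternative that avoids editing CMakeLists.txt, the deployment target can also be passed on the configure command line:)

cmake -B build -DCMAKE_SYSTEM_NAME=iOS -DCMAKE_OSX_DEPLOYMENT_TARGET=14.5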

And I had an issue building it with my iOS application:

1>Xamarin.Shared.Sdk.targets(1560,3): Error : clang++ exited with code 1:
Undefined symbols for architecture arm64:
"_PaPthreadUtil_GetTime", referenced from:
_PaUnixThread_New in portaudio.a(pa_unix_util.c.o)
"_PaPthreadUtil_NegotiateCondAttrClock", referenced from:
_PaUnixThread_New in portaudio.a(pa_unix_util.c.o)
ld: symbol(s) not found for architecture arm64
clang: error: linker command failed with exit code 1 (use -v to see invocation)

I think that pa_pthread_util is not built.

EDIT:

I found the solution. I added the following to add_library(...):
src/os/unix/pa_pthread_util.c
src/os/unix/pa_pthread_util.h

And it works fine. ;)
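
(For readers hitting the same linker error, a sketch of the kind of change described above; the exact shape and location of the add_library() call in PortAudio's CMakeLists.txt may differ:)

add_library(portaudio
    # ... existing sources ...
    src/os/unix/pa_pthread_util.c
    src/os/unix/pa_pthread_util.h
)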

Successfully merging this pull request may close these issues: "Iphone ios SDK Deprecation" and "PortAudio implementation for iOS".