- If no converter is needed, don't create/use it
- If no spatialization is needed, don't copy
In the best case, samples will now be read into a buffer and immediately mixed into the output.
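For illustration, a minimal sketch of that fast path, with made-up names
(the real code lives in audio.c and uses lovr's Source/decoder types):

    #include <stdint.h>

    #define CHANNELS 2

    // Hypothetical decoder callback: writes up to 'capacity' frames of
    // interleaved float samples into 'frames', returns frames produced.
    typedef uint32_t (*decode_fn)(void* decoder, float* frames, uint32_t capacity);

    // Fast path: the source already matches the mix format and needs no
    // spatialization, so decode into a scratch buffer and add it straight
    // into the output -- no converter, no extra copy.
    static void mixSourceDirect(void* decoder, decode_fn decode, float* output, uint32_t frameCount) {
      float scratch[256 * CHANNELS];
      while (frameCount > 0) {
        uint32_t chunk = frameCount < 256 ? frameCount : 256;
        uint32_t read = decode(decoder, scratch, chunk);
        if (read == 0) break; // source finished
        for (uint32_t i = 0; i < read * CHANNELS; i++) {
          output[i] += scratch[i]; // mix immediately
        }
        output += read * CHANNELS;
        frameCount -= read;
      }
    }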
This is a large patch which adds a new Oculus Audio spatializer. Oculus Audio is slightly different from the dummy spatializer in a few ways:
- It *must* receive fixed-size input buffers, every time, always.
- It can only handle a fixed number of spatialized sound sources at a time.
- It has a concept of "tails": the spatialization of a sound can continue after the sound itself ends (e.g. echo).
Changes to audio.c were needed to support Oculus Audio's quirks:
- audio.c now supports a "fixedBuffer" mode which invokes the generator/spatializer in fixed size chunks
- Each source now has an intptr_t "memo" field that the spatializer may use to store whatever it needs (the Oculus spatializer uses this to track its sound source limit).
- The spatializer interface gained a couple of new methods: a "tail" method, which returns a sound buffer after all sources are processed, and "create"/"destroy" methods, which are called when a sound source is created or destroyed (the Oculus spatializer uses these to populate/clear the "memo" field).
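As a rough sketch, the interface could now look something like this; the
struct and field names here are guesses for illustration, not the exact
ones in the source (the config structs are described further below):

    #include <stdbool.h>
    #include <stdint.h>

    struct Source; // opaque; carries the intptr_t "memo" field

    typedef struct {
      uint32_t maxSourcesHint; // from t.audio.spatializerMaxSourcesHint
      uint32_t sampleRate;
    } SpatializerConfigIn;

    typedef struct {
      bool needFixedBuffer;     // set by the spatializer to request fixed-size chunks
      uint32_t fixedBufferSize; // chunk size when fixed buffers are requested
    } SpatializerConfigOut;

    typedef struct {
      bool (*init)(SpatializerConfigIn in, SpatializerConfigOut* out);
      void (*destroy)(void);

      // Spatialize one source's input; returns frames written to output.
      uint32_t (*apply)(struct Source* source, const float* input, float* output, uint32_t frames);

      // Runs after all sources are processed; lets the spatializer emit
      // leftover audio ("tails", e.g. an echo that outlives its source).
      uint32_t (*tail)(float* scratch, float* output, uint32_t frames);

      // Called on source creation/destruction; the Oculus spatializer uses
      // these to claim/release one of its fixed source slots via "memo".
      void (*sourceCreate)(struct Source* source);
      void (*sourceDestroy)(struct Source* source);
    } Spatializer;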
Along the way some other miscellaneous changes got made:
- lovr.audio.getSpatializerName() returns the name of the current spatializer
- Spatializer init now takes "config in" and "config out" structs (the spatializer changes fields in config out to request features; currently just fixed-buffer mode).
- lovr.conf now takes t.audio.spatializer (the string name of the desired spatializer) and t.audio.spatializerMaxSourcesHint (spatializers with a max source limit, like Oculus, use this as the limit).
- audio.c went back to tracking position/orientation as vectors rather than a matrix
- A file oculus_spatializer_math_shim.h was added containing a minimal copy-paste from the Oculus SDK's OVR_CAPI.h to supply the ovrPoseStatef struct the spatializer API needs. This may have license consequences, but we are probably OK via a combination of fair use and the fact that a user cannot use this header file without accepting Oculus's license through other means.
Some work remains to be done: in particular, there is an entire reverb feature I did not touch, and LOVR_USE_OCULUS_AUDIO cannot be activated from tup. The Oculus Spatializer works better when it has velocity and time information, but this patch does not supply it.
* Stop also uninitializes
* Reset doesn't exist. Just stop and start instead.
* lovrAudioInit no longer takes config, and config is now private.
Call lovrAudioStart if you want to start.
* ma_device_{un}init and start/stop are only called from one place each,
reducing the risk of dangling state
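A sketch of the shape this produces (state layout and signatures are
illustrative): ma_device_init/ma_device_start happen only in start,
ma_device_stop/ma_device_uninit only in stop, so the device can't be
left half-initialized by competing code paths.

    #include "miniaudio.h"
    #include <stdbool.h>

    static struct {
      ma_device device;
      bool initialized;
    } state;

    bool lovrAudioStart(const ma_device_config* config) {
      if (state.initialized) return true;
      if (ma_device_init(NULL, config, &state.device) != MA_SUCCESS) return false;
      if (ma_device_start(&state.device) != MA_SUCCESS) {
        ma_device_uninit(&state.device);
        return false;
      }
      state.initialized = true;
      return true;
    }

    void lovrAudioStop(void) {
      if (!state.initialized) return;
      ma_device_stop(&state.device);
      ma_device_uninit(&state.device); // stop also uninitializes
      state.initialized = false;
    }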
* Takes device type, so you only get either playback or capture devices
* Doesn't store devices in state, reducing risk of dangling pointers
* Uses names instead of identifiers, since miniaudio identifiers become
invalid if you call "getDevices" again (see the sketch after this list)
* Better diagnostics
* Split up lovrAudioInitDevice to be per-type, cleaner that way
* UseDevice now takes type and name, instead of just identifier
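A minimal sketch of the name-based enumeration, using miniaudio's
ma_context_get_devices. The returned ma_device_info array (and the
ma_device_id values inside it) are only valid until the next
enumeration, which is why only names are handed out:

    #include "miniaudio.h"

    static void listDevices(ma_context* context, ma_device_type type, void (*callback)(const char* name)) {
      ma_device_info *playback, *capture;
      ma_uint32 playbackCount, captureCount;
      if (ma_context_get_devices(context, &playback, &playbackCount, &capture, &captureCount) != MA_SUCCESS) {
        return;
      }
      ma_device_info* infos = type == ma_device_type_playback ? playback : capture;
      ma_uint32 count = type == ma_device_type_playback ? playbackCount : captureCount;
      for (ma_uint32 i = 0; i < count; i++) {
        callback(infos[i].name); // hand out names, never ma_device_id pointers
      }
    }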
aka a9541579f38a0c1bab4bba294f3602fa0b80f127, plus cherry-pick of
2dc604ecde0f02280690c72f943bfb8bf52dd820.
There is a crasher in 0.10.13 and newer on Oculus Quest
(See https://github.com/mackron/miniaudio/issues/247)
This works by detecting the failed start and requesting the
permission at that point; emitting a new event type when the
permission has been granted or rejected; and handling that event
in the default boot.lua to restart capture.
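A sketch of the flow with hypothetical names (the real event type and
permission helpers live in lovr's event and platform layers; this shows
the shape, not the exact code):

    #include <stdbool.h>

    typedef enum { PERMISSION_AUDIO_CAPTURE } Permission;

    // Hypothetical stand-ins for lovr's platform/event plumbing:
    static bool startCaptureDevice(void) { return false; }
    static void requestPermission(Permission permission) { /* ask the OS */ }
    static void pushPermissionEvent(Permission permission, bool granted) { /* queue for Lua */ }

    // 1. A failed start triggers a permission request instead of an error.
    bool lovrAudioStartCapture(void) {
      if (startCaptureDevice()) return true;
      requestPermission(PERMISSION_AUDIO_CAPTURE);
      return false;
    }

    // 2. The OS answers asynchronously; the answer becomes a lovr event,
    //    which the default boot.lua handles by starting capture again.
    void onPermissionResult(Permission permission, bool granted) {
      pushPermissionEvent(permission, granted);
    }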
- The plugins folder can contain native plugins.
- CMake will build plugins that contain a CMakeLists.txt
- They can check the LOVR variable to see if they are being built inside LOVR.
- They can set the LOVR_PLUGIN_TARGETS variable to a list of targets they build.
- If blank, all non-imported targets added in the folder will be used.
- The libraries built by their targets will be moved next to the executable or into the apk.
- The library loader now tries to load libraries next to the executable or in the APK (see the sketch after this list).
- It is "fixed function" for now; this may be improved in the future.
- The lovr.filesystem C require path has been removed.
- enet and cjson have been removed. Use plugins.
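For the loader piece, a POSIX-flavored sketch (the real loader also
covers Windows and the APK case; names are illustrative):

    #include <dlfcn.h>
    #include <stdio.h>

    // Try "<exeDir>/<name>.so" first, so plugin libraries shipped next
    // to the executable win over system-wide ones.
    static void* loadPluginLibrary(const char* exeDir, const char* name) {
      char path[1024];
      snprintf(path, sizeof(path), "%s/%s.so", exeDir, name);
      return dlopen(path, RTLD_NOW | RTLD_LOCAL);
    }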
stb_image's vertical flip flag was not thread safe in the version
of stb_image we were using. We patched stb_image to use a thread
local variable for the flag. stb_image has since been upgraded to
expose a thread local version of the flag, so our patch is no longer
necessary after upgrading.
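After the upgrade, each decoding thread can set its own flag via
stb_image's thread-local API, e.g.:

    #include "stb_image.h"

    // Decode an image flipped vertically; the flag only affects this thread.
    static unsigned char* loadFlipped(const unsigned char* data, int size, int* w, int* h) {
      int channels;
      stbi_set_flip_vertically_on_load_thread(1);
      return stbi_load_from_memory(data, size, w, h, &channels, 4);
    }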
The CMake flag to enable the thread local patch did not make very much
sense because thread local stuff is unconditionally used elsewhere.
Headset drivers are allowed to override the vsync setting if vsync
messes up their frame timing. The vsync property is effectively a
global piece of state in core/os and doesn't change across restarts
because the window is persistent. This can mean that if you switch
from a headset driver that wants vsync off (anything except desktop)
to a headset driver that doesn't care what the vsync is (desktop),
you could end up with a vsync setting that doesn't match t.window.vsync.
I think this is a symptom of poor design somewhere, and the best solution
to this problem is "to just not have it". Similar issues exist for, e.g.,
the window size (but that one is less weird because at least you were
the one who changed it). For now we are just going to ensure that
lovr.graphics.createWindow always modifies the vsync property.
Untested, may need to adjust this fix later.
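The gist of the fix, sketched with GLFW calls (the function name is
illustrative): always apply the requested swap interval when the window
is (re)configured, even if the persistent window already exists.

    #include <GLFW/glfw3.h>

    static void applyVsync(GLFWwindow* window, int vsync) {
      glfwMakeContextCurrent(window);
      glfwSwapInterval(vsync); // unconditionally overwrite any stale value
    }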
lovrGraphicsMapBuffer had the potential to cause a flush. Flushing
unmaps buffers. This meant that during any of the calls to map while
creating a Batch, it was possible to cause a flush and unmap other
buffers that expected to be mapped. This caused writes to unmapped
pointers and subsequent skipping of calls to glFlushMappedBufferRange.
The fix is to figure out if we need to flush upfront and get it out
of the way before mapping any buffers.
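A sketch of the idea with made-up names: decide whether any upcoming map
will overflow before mapping anything, and do the single flush up front
so nothing can get unmapped behind a caller's back.

    #include <stdbool.h>
    #include <stddef.h>

    typedef struct {
      size_t cursor; // write offset
      size_t size;   // total capacity
    } StreamBuffer;

    static void flushEverything(void) {
      // unmap all buffers, glFlushMappedBufferRange, submit, etc.
    }

    void prepareBatchBuffers(StreamBuffer* buffers, size_t* counts, size_t n) {
      bool needsFlush = false;
      for (size_t i = 0; i < n; i++) {
        needsFlush = needsFlush || (buffers[i].cursor + counts[i] > buffers[i].size);
      }
      if (needsFlush) flushEverything(); // get the flush out of the way first
      // ...now map all the buffers; nothing below this point can flush.
    }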
- Backported the OCULUSGO device type enumerant. Need to test to
determine if the Oculus Go still reports this device type or if
it just reports unknown.
- A more involved fix would be to use JNI to discover the build model
from the Android settings (sketched below).
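That JNI lookup could look roughly like this (android.os.Build.MODEL is
a real field; error handling omitted):

    #include <jni.h>
    #include <string.h>

    static void getBuildModel(JNIEnv* env, char* buffer, size_t size) {
      jclass build = (*env)->FindClass(env, "android/os/Build");
      jfieldID field = (*env)->GetStaticFieldID(env, build, "MODEL", "Ljava/lang/String;");
      jstring model = (jstring) (*env)->GetStaticObjectField(env, build, field);
      const char* chars = (*env)->GetStringUTFChars(env, model, NULL);
      strncpy(buffer, chars, size - 1);
      buffer[size - 1] = '\0';
      (*env)->ReleaseStringUTFChars(env, model, chars);
    }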
Some hardware supports ARB_compute_shader but not 4.3, causing
shader compilation failures because currently we switch to GLSL 430
if compute shaders are detected.
Instead of looking for the compute shader extension, just detect GL 4.3.
This means that compute shaders will sometimes be unavailable even when
they're supported.
It would be possible to improve this by modifying the way shaders
are compiled. Maybe the highest supported GLSL version should be used,
but this makes shader authoring somewhat more difficult.
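The version-driven check is simple; a sketch (the exact version strings
lovr uses may differ):

    #include <stdbool.h>

    // Pick the GLSL version from the reported GL version, not from
    // extension presence, so a 4.2 + ARB_compute_shader context is never
    // fed "#version 430".
    static const char* pickGlslVersion(int major, int minor) {
      bool hasGL43 = major > 4 || (major == 4 && minor >= 3);
      return hasGL43 ? "#version 430\n" : "#version 330\n";
    }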
We never try to do this anyway, and the unmapping code in discard
doesn't flush contents, so it's better for people to unmap the
buffer themselves before calling discard.
It appears that GL_MAP_UNSYNCHRONIZED_BIT interferes with
GL_MAP_INVALIDATE_BUFFER_BIT's ability to discard buffer
contents. Removing the unsynchronized bit fixes visual
glitches on Intel HD GPUs.
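Sketch of the flag change, assuming a GL loader such as glad provides
the symbols (the surrounding flag set is illustrative):

    #include "glad/glad.h"

    static void* mapForDiscard(GLenum target, GLsizeiptr size) {
      GLbitfield flags = GL_MAP_WRITE_BIT
        | GL_MAP_INVALIDATE_BUFFER_BIT  // discard the old contents
        | GL_MAP_FLUSH_EXPLICIT_BIT;    // ranges are flushed explicitly
      // GL_MAP_UNSYNCHRONIZED_BIT used to be here too; dropping it lets
      // Intel HD drivers actually honor the invalidation.
      return glMapBufferRange(target, 0, size, flags);
    }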
- Make the render loop synchronous by hijacking the RAF to run on the
XRSession when active (see the sketch after this list).
- Convert os_web to use emscripten's native HTML5 interface instead
of going through GLFW.
- Stop using preinitialized GL context -- lovrPlatformCreateWindow
now creates the context.
- GLES2/3 emulation is not necessary.
- Remove inline sessions. The VR simulator is used to render to the
Canvas instead. webxr_attach and webxr_detach are used to replace
the active headset driver with the webxr driver when an immersive
session starts.
- Add noop desktop_getSkeleton.
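The non-XR half of the synchronous loop can be sketched with
emscripten's html5.h API (the XR half re-targets the same callback at
XRSession.requestAnimationFrame on the JS side):

    #include <emscripten/html5.h>

    static EM_BOOL onFrame(double time, void* userdata) {
      // run one iteration of the main loop here
      return EM_TRUE; // keep the loop alive
    }

    void startRenderLoop(void) {
      emscripten_request_animation_frame_loop(onFrame, NULL);
    }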
It doesn't need to check it for RGB and compressed textures because
those are already rejected.
It may also be a good idea to zero-out the srgb flag for formats that
it doesn't apply to.
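A hypothetical version of that zero-out, with an illustrative format
enum standing in for the real one:

    #include <stdbool.h>

    typedef enum { FORMAT_RGBA, FORMAT_R32F, FORMAT_D16 } TextureFormat;

    // Clear the srgb flag for formats sRGB doesn't apply to, so later
    // code never has to special-case them.
    static bool normalizeSrgb(TextureFormat format, bool srgb) {
      return srgb && format == FORMAT_RGBA; // only color formats keep it
    }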
- lovr.headset.newModel accepts an optional options table as the
second argument. There is currently a single option named
'animated' that can be used to request an animatable model.
Currently it isn't clear if this should be a hint or not.
- lovr.headset.animate (name pending) can be called with a device
and a model (usually with an animated model from headset.newModel,
but this is not required). The function attempts to animate the
Model to match the pose of the device in an opaque driver-specific
way, and returns whether or not this was successful.
- OpenVR has models for controllers with a system called "components"
that can be used to animate the individual buttons. Now the OpenVR
headset driver implements the 'animate' function to make use of the
controller components, to easily load and render animated controllers.
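On the driver side, the shape is roughly this (names approximate lovr's
headset interface rather than quoting it):

    #include <stdbool.h>

    struct Model;
    struct ModelData;

    typedef enum { DEVICE_HEAD, DEVICE_HAND_LEFT, DEVICE_HAND_RIGHT } Device;

    typedef struct {
      // 'animated' hints that the returned model should be animatable.
      struct ModelData* (*newModelData)(Device device, bool animated);

      // Pose the Model's nodes to match the device (OpenVR does this with
      // controller "components"); returns whether it succeeded.
      bool (*animate)(Device device, struct Model* model);
    } HeadsetInterface;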
ModelData manages a single allocation and creates pointers into
that allocation. These pointers were tightly packed, creating
alignment issues which triggered undefined behavior. Now, the
pointers are all aligned to 8 byte boundaries.
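The fix amounts to rounding each sub-allocation's offset up before
handing out a pointer; a minimal sketch:

    #include <stddef.h>

    #define ALIGN8(x) (((x) + 7) & ~(size_t) 7)

    // Carve a piece out of one big allocation, 8-byte-aligned so the
    // resulting pointer is valid for any of ModelData's element types.
    static void* suballocate(char* block, size_t* offset, size_t size) {
      *offset = ALIGN8(*offset);
      void* pointer = block + *offset;
      *offset += size;
      return pointer;
    }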
* lovrPlatformGetBundlePath was missing the root argument
* ANDROID_SDK can't be assumed to be the parent of the ndk folder, in case it's a side-by-side installation of the NDK. Instead, ANDROID_SDK should be provided with -D
* One more thing we could mention in the docs that I ran into: installing Java with apt gave me an incompatible version. It worked better to just pass -DJAVA_HOME= pointing at the Java that comes with Android Studio (/snap/android-studio/91/android-studio/jre on Ubuntu).