Audio currently stutters on the wasm build. It is much more severe
in Chrome than in Firefox (very rare/subtle in Firefox). miniaudio
is currently using ScriptProcessorNode, which is deprecated because
it processes audio on the main thread. There's a newer API, AudioWorklet,
that lets you process audio on a separate thread, but it's hella
complicated. miniaudio doesn't want to support it because of the
complexity and because it requires loading a separate JavaScript file,
though it seems like that could be worked around with a Blob.
In the meantime, miniaudio bumps up the buffer size on WebAudio, so
let's just use that in the hope that it helps.
- Sources without converters always read into the beginning of the
raw buffer, overwriting previous frames if the source was rewound
due to looping. This resulted in an audible click whenever the
source was rewound.
- After looping, Sources without converters would try to read too
many frames -- they would read a full buffer instead of only the
necessary number of frames.
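A minimal sketch of the corrected read loop under these fixes; the
function pointers and fields here are illustrative, not LOVR's actual
names:

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct Decoder Decoder; /* stands in for the raw sound */

    /* Read into the tail of the raw buffer and only request the frames
     * still missing, so a rewind never overwrites earlier frames or
     * reads more than necessary. */
    static uint32_t readFrames(Decoder* d,
        uint32_t (*read)(Decoder*, float*, uint32_t),
        void (*rewind)(Decoder*),
        float* buffer, uint32_t channels, uint32_t framesNeeded,
        bool looping) {
      uint32_t framesRead = 0;
      while (framesRead < framesNeeded) {
        uint32_t count = read(d, buffer + framesRead * channels,
          framesNeeded - framesRead);
        if (count == 0) {
          if (!looping) break;
          rewind(d); /* assumes a non-empty sound */
          continue;
        }
        framesRead += count;
      }
      return framesRead;
    }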
- A list or map of effects can be provided to newSource.
- false can be used to bypass effects.
- All effects are enabled by default.
- Occlusion-y effects should only take effect when setGeometry is called
- Spatializer is responsible for ensuring this.
ODE errors, debug output, and messages are redirected into LOVR's log
system via a callback for each.
The ODE submodule is updated to a revision that does not crash when an
error or debug message occurs.
30e01f upgraded stb_image to include its 95560b commit from its #960
pull request. This made stb_image fail more aggressively on EOF
conditions when refilling Huffman buffers in deflate streams. I think
it might be failing _too_ aggressively, though. We are able to pad our
input compressed buffers since the zip file format is guaranteed to have
extra data at the end (for, e.g., the end of central directory record).
This appears to be sufficient to fix compressed zip archives for the
time being. It's possible that more virtual padding needs to be added,
and it may be good to try to fix this in stb_image itself.
The falloff is the minimum distance at which inverse-distance
attenuation starts to take effect.
A non-positive value disables distance attenuation.
In the Lua API, nil can be used to disable attenuation, a boolean can be
used to enable attenuation with a default minimum distance, or a number
can be used for full control over the parameter.
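A hedged sketch of how such a binding might dispatch on the argument
type; function and field names here are illustrative, not necessarily
LOVR's:

    #include <lua.h>
    #include <lauxlib.h>

    typedef struct { float falloff; } Source; /* illustrative */

    static int l_lovrSourceSetFalloff(lua_State* L) {
      Source* source = lua_touserdata(L, 1); /* real code type-checks */
      switch (lua_type(L, 2)) {
        case LUA_TNONE:
        case LUA_TNIL: /* disable attenuation */
          source->falloff = 0.f;
          break;
        case LUA_TBOOLEAN: /* enable with a default minimum distance */
          source->falloff = lua_toboolean(L, 2) ? 1.f : 0.f;
          break;
        case LUA_TNUMBER: /* full control over the parameter */
          source->falloff = (float) luaL_checknumber(L, 2);
          break;
        default:
          return luaL_argerror(L, 2, "nil, boolean, or number expected");
      }
      return 0;
    }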
Add support for importing ambisonic WAV files and 24/32 bit PCM WAV files.
The standard ambisonic format used internally in LÖVR is ACN channel ordering with SN3D normalization.
Anything else will be converted to this form.
There are a few restrictions and assumptions:
- Only 1st order ambisonics are supported. They need to have 4 channels.
- They can be in AMB format (Furse-Malham order/normalization), detected via WAVE_EXTENSIBLE GUID.
- Any other 4 channel file is assumed to be in "AmbiX" ACN/SN3D format.
- It seems that most ambisonic files in the wild that claim to be AmbiX are just 4 channel WAVs without any metadata.
- This means that non-ambisonic 4 channel WAVs could be mistaken for ambisonic ones. This is an accepted limitation of LÖVR.
- Ambisonic files cannot currently be played back. SteamAudio currently has numerous bugs with this.
- Perhaps it would be possible to write an ambisonic rotator/panning decoder to use as a default implementation.
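A sketch of the detection rules above; the GUID parameter and layout
names are illustrative:

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    typedef enum { LAYOUT_NOT_AMBISONIC, LAYOUT_AMBIX, LAYOUT_FUMA } Layout;

    /* ambGuid is the WAVE_EXTENSIBLE subformat GUID used by AMB files. */
    static Layout classify(uint32_t channels, bool extensible,
        const uint8_t guid[16], const uint8_t ambGuid[16]) {
      if (channels != 4) return LAYOUT_NOT_AMBISONIC; /* 1st order only */
      if (extensible && !memcmp(guid, ambGuid, 16)) return LAYOUT_FUMA;
      return LAYOUT_AMBIX; /* no metadata: assume ACN/SN3D */
    }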
- Compute feature requires compute shaders, image load/store, and SSBOs.
- GLSL 330 is always used, instead of switching versions based on the compute shader extension.
- Explicitly enable compute shaders, image load/store, and SSBO extensions when needed.
This allows implementations that don't support GLSL 430 to run compute shaders,
and keeps the min supported GL version more consistently at GL3.3.
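The extension names below are the standard GL ones; the exact prefix
the shader compiler emits may differ, but it would look roughly like:

    const char* computePrefix =
      "#version 330\n"
      "#extension GL_ARB_compute_shader : require\n"
      "#extension GL_ARB_shader_image_load_store : require\n"
      "#extension GL_ARB_shader_storage_buffer_object : require\n";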
If 64 sources are playing and a new one is started, Source:play will
return false.
Instead of a linked list, a static list of 64 Sources is used.
Bit scanning intrinsics are used to efficiently iterate the list,
using a mask (still deciding on this).
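A sketch of the iteration, assuming a GCC/Clang-style bit scan builtin
and one mask bit per occupied slot:

    #include <stdint.h>

    typedef struct Source Source;

    static void forEachSource(Source* sources[64], uint64_t mask,
        void (*visit)(Source*)) {
      while (mask) {
        int index = __builtin_ctzll(mask); /* lowest set bit */
        mask &= mask - 1;                  /* clear it */
        visit(sources[index]);
      }
    }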
- If no converter is needed, don't create/use it
- If no spatialization is needed, don't copy
In the best case, samples will now be read into a buffer and immediately mixed into the output.
This is a large patch which adds a new Oculus Audio spatializer. Oculus Audio is slightly different from the dummy spatializer in a few ways:
- It *must* receive fixed-size input buffers, every time, always.
- It can only handle a fixed number of spatialized sound sources at a time.
- It has a concept of "tails"; the spatialization of a sound can continue after the sound itself ends (eg echo).
Changes to audio.c were needed to support Oculus Audio's quirks:
- audio.c now supports a "fixedBuffer" mode which invokes the generator/spatializer in fixed size chunks
- Each source now has an intptr_t "memo" field that the spatializer may use to store whatever (Oculus spatializer uses this to handle the sound source limit).
- The spatializer interface got a few new methods (sketched below): a "tail" method which returns a sound buffer after all sources are processed, and "create" and "destroy" methods that are called when a sound source is created or destroyed (the Oculus spatializer uses these to populate/clear the "memo" field).
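Rough shape of the resulting interface, with illustrative names and
signatures (the real struct lives in the audio module):

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct Source Source;

    typedef struct { uint32_t maxSourcesHint; } SpatializerConfigIn;
    typedef struct { bool needFixedBuffer; } SpatializerConfigOut;

    typedef struct {
      bool (*init)(SpatializerConfigIn in, SpatializerConfigOut* out);
      uint32_t (*apply)(Source* source, const float* input,
        float* output, uint32_t frames);
      uint32_t (*tail)(float* scratch, float* output, uint32_t frames);
      void (*sourceCreate)(Source* source);  /* may set the memo field */
      void (*sourceDestroy)(Source* source); /* clears the memo field */
    } Spatializer;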
Along the way some other miscellaneous changes got made:
- lovr.audio.getSpatializerName() returns the current spatializer
- Spatializer init now takes in "config in" and "config out" structs (Spatializer changes fields in config out to request things, currently fixed buffer mode).
- lovr.conf now takes t.audio.spatializer (string name of desired spatializer) and t.audio.spatializerMaxSourcesHint (Spatializers with max sources limits like Oculus will use this as the limit).
- audio.c went back to tracking position/orientation as vectors rather than a matrix
- A file oculus_spatializer_math_shim.h was added containing a minimal copy-paste of OVR_CAPI.h from the Oculus SDK, to provide the ovrPoseStatef struct the spatializer API needs. This may have license consequences, but we are probably OK via a combination of fair use and the fact that a user cannot use this header file without accepting Oculus's license through other means.
Some work remains to be done: in particular, there is an entire reverb feature I did not touch, and LOVR_USE_OCULUS_AUDIO cannot be activated from tup. The Oculus spatializer works better when it has velocity and time information, but this patch does not supply it.
* Stop also uninitializes
* Reset doesn't exist. Just stop and start instead.
* lovrAudioInit no longer takes config, and config is now private.
Call lovrAudioStart if you want to start.
* ma_device_{un}init and start/stop are only called from one place each,
reducing the risk of dangling state
* Takes device type, so you only get either playback or capture devices
* Doesn't store devices in state, reducing risk of dangling pointers
* Uses names instead of identifiers, since miniaudio identifiers become
invalid if you call "getDevices" again
* Better diagnostics
* Split up lovrAudioInitDevice to be per-type, cleaner that way
* UseDevice now takes type and name, instead of just identifier
aka a9541579f38a0c1bab4bba294f3602fa0b80f127, plus cherry-pick of
2dc604ecde0f02280690c72f943bfb8bf52dd820.
There is a crasher in 0.10.13 and newer on Oculus Quest
(See https://github.com/mackron/miniaudio/issues/247)
By looking for a failed start and requesting permission at that point,
then emitting a new event type when permission has been granted or
rejected, and using that event in the default boot.lua to restart
capture.
- The plugins folder can contain native plugins.
- CMake will build plugins with CMakeLists in them
- They can check the LOVR variable to see if they are being built inside LOVR.
- They can set the LOVR_PLUGIN_TARGETS variable to a list of targets they build.
- If blank, all non-imported targets added in the folder will be used.
- The libraries built by their targets will be moved next to the executable or into the apk.
- The library loader now tries to load libraries next to the executable or in the APK.
- It is "fixed function" now, this may be improved in the future.
- The lovr.filesystem C require path has been removed.
- enet and cjson have been removed. Use plugins.
stb_image's vertical flip flag was not thread safe in the version
of stb_image we were using. We patched stb_image to use a thread
local variable for the flag. stb_image has since been upgraded to
expose a thread local version of the flag, so our patch is no longer
necessary after upgrading.
The CMake flag to enable the thread local patch did not make very much
sense because thread local stuff is unconditionally used elsewhere.
Headset drivers are allowed to override the vsync setting if vsync
messes up their frame timing. The vsync property is effectively a
global piece of state in core/os and doesn't change across restarts
because the window is persistent. This can mean that if you switch
from a headset driver that wants vsync off (anything except desktop)
to a headset driver that doesn't care what the vsync is (desktop),
you could end up with a vsync setting that doesn't match t.window.vsync.
I think this is a symptom of poor design somewhere and the best solution
to this problem is "to just not have it". Similar issues exist for, e.g.,
the window size (but that one is less weird because at least you were
the one who changed it). For now we are just going to ensure that
lovr.graphics.createWindow always modifies the vsync property.
Untested, may need to adjust this fix later.
lovrGraphicsMapBuffer had the potential to cause a flush. Flushing
unmaps buffers. This meant that during any of the calls to map while
creating a Batch, it was possible to cause a flush and unmap other
buffers that expected to be mapped. This caused writes to unmapped
pointers and subsequent skipping of calls to glFlushMappedBufferRange.
The fix is to figure out if we need to flush upfront and get it out
of the way before mapping any buffers.
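A sketch of the shape of the fix, with illustrative names: compute
whether any map in this Batch could overflow, and flush once before
handing out pointers.

    #include <stdbool.h>
    #include <stdint.h>

    /* The flush (which unmaps everything) happens before the first map,
     * so a flush can never invalidate a pointer mid-write. */
    static bool needFlushUpfront(const uint64_t* cursors,
        const uint64_t* sizes, const uint64_t* requests, int count) {
      for (int i = 0; i < count; i++) {
        if (cursors[i] + requests[i] > sizes[i]) return true;
      }
      return false;
    }

The caller flushes once if this returns true, then performs all of its
maps with no flush possible in between.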
- Backported the OCULUSGO device type enumerant. Need to test to
determine if the Oculus Go still reports this device type or if
it just reports unknown.
- A more involved fix will be to use JNI to discover the build model
from the Android settings.
Some hardware supports ARB_compute_shader but not 4.3, causing
shader compilation failures because currently we switch to GLSL 430
if compute shaders are detected.
Instead, just detect GL 4.3 instead of looking for the compute shader
extension. This means that compute shaders will sometimes be
unavailable even when they're supported.
It would be possible to improve this by modifying the way shaders
are compiled. Maybe the highest supported GLSL version should be used,
but this makes shader authoring somewhat more difficult.
We never try to do this anyway, and the unmapping code in discard
doesn't flush contents so it's better for people to unmap the
buffer themselves before calling discard.
It appears that GL_MAP_UNSYNCHRONIZED_BIT interferes with
GL_MAP_INVALIDATE_BUFFER_BIT's ability to discard buffer
contents. Removing the unsynchronized bit fixes visual
glitches on Intel HD GPUs.
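Concretely, the access flags end up as something like this (flag names
are the standard GL ones; the exact combination LOVR uses may differ):

    GLbitfield access = GL_MAP_WRITE_BIT
      | GL_MAP_INVALIDATE_BUFFER_BIT  /* discard old contents */
      | GL_MAP_FLUSH_EXPLICIT_BIT;
      /* GL_MAP_UNSYNCHRONIZED_BIT intentionally omitted */
    void* data = glMapBufferRange(GL_ARRAY_BUFFER, 0, size, access);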
- Make the renderloop synchronous by hijacking the RAF to run on the
XRSession when active.
- Convert os_web to use emscripten's native HTML5 interface instead
of going through GLFW.
- Stop using preinitialized GL context -- lovrPlatformCreateWindow
now creates the context.
- GLES2/3 emulation is not necessary.
- Remove inline sessions. The VR simulator is used to render to the
Canvas instead. webxr_attach and webxr_detach are used to replace
the active headset driver with the webxr driver when an immersive
session starts.
- Add noop desktop_getSkeleton.
It doesn't need to be checked for RGB and compressed textures because
those are already rejected.
It may also be a good idea to zero-out the srgb flag for formats that
it doesn't apply to.
- lovr.headset.newModel accepts an optional options table as the
second argument. There is currently a single option named
'animated' that can be used to request an animatable model.
Currently it isn't clear if this should be a hint or not.
- lovr.headset.animate (name pending) can be called with a device
and a model (usually with an animated model from headset.newModel,
but this is not required). The function attempts to animate the
Model to match the pose of the device in an opaque driver-specific
way, and returns whether or not this was successful.
- OpenVR has models for controllers with a system called "components"
that can be used to animate the individual buttons. Now the OpenVR
headset driver implements the 'animate' function to make use of the
controller components, to easily load and render animated controllers.
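The driver hook implied above might look roughly like this (signature
illustrative; the Device enum is abridged):

    typedef struct Model Model;
    typedef enum { DEVICE_HEAD, DEVICE_HAND_LEFT, DEVICE_HAND_RIGHT } Device;

    typedef struct {
      /* ...existing driver callbacks... */
      bool (*animate)(Device device, Model* model); /* true on success */
    } HeadsetInterface;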
ModelData manages a single allocation and creates pointers into
that allocation. These pointers were tightly packed, creating
alignment issues which triggered undefined behavior. Now, the
pointers are all aligned to 8 byte boundaries.
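A typical way to keep such suballocations aligned (macro and function
names illustrative):

    #include <stdint.h>

    #define ALIGN8(n) (((n) + 7) & ~(uint64_t) 7)

    /* Advance the cursor by a padded size so every pointer carved out
     * of the single block starts on an 8-byte boundary. */
    static void layout(char* block, uint64_t positionBytes,
        float** positions, uint16_t** indices) {
      char* cursor = block;
      *positions = (float*) cursor;
      cursor += ALIGN8(positionBytes);
      *indices = (uint16_t*) cursor;
    }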
* lovrPlatformGetBundlePath was missing the root argument
* ANDROID_SDK can't be assumed to be the parent of the ndk folder, in case it's a side-by-side installation of the NDK. Instead, ANDROID_SDK should be provided with -D
* One more thing I ran into that we could mention in the docs: installing Java with apt gave me an incompatible version. It worked better to just point -DJAVA_HOME= at the Java that comes with Android Studio (/snap/android-studio/91/android-studio/jre on Ubuntu).
There are some attributes that don't have a location (gl_InstanceID
is being reported for some reason). Their location is -1 and this
causes a left shift of a negative value which is undefined.
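The guard is a one-liner (mask name illustrative):

    GLint location = glGetAttribLocation(program, name);
    if (location >= 0) { /* builtins like gl_InstanceID report -1 */
      attributeMask |= (1u << location);
    }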
The new t.graphics.debug flag controls the following:
- If enabled, a debug context is created
- If disabled, a no-error context is created
- If enabled, GL debug messages are forwarded to lovr.log
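The forwarding might look roughly like this, assuming the standard
KHR_debug callback signature (lovrLog stands in for whatever logging
hook backs lovr.log; add APIENTRY on Windows):

    static void onDebugMessage(GLenum source, GLenum type, GLuint id,
        GLenum severity, GLsizei length, const GLchar* message,
        const void* userdata) {
      lovrLog(LOG_DEBUG, "GL", "%s", message);
    }

    /* only when t.graphics.debug is set: */
    glEnable(GL_DEBUG_OUTPUT);
    glDebugMessageCallback(onDebugMessage, NULL);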
Add entrypoints, headset backend code, fill in the Activity, and
add various special cases to account for the asynchronous render loop,
lack of sRGB support, and OpenGL state resets.
Usually these are more of a platform-specific concept, and they
don't really interact with files or do any I/O.
There is a little bit of duplication among the *nix platforms since
they're similar, but overall this organization feels a bit better.
With the check for samples==0 being done BELOW the assert for
offset+samples<soundData->samples, passing 0 for samples while the mic
had more samples available than the created buffer could hold would
cause a buffer overrun.
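The fix is just to reorder the two checks (names as in the prose
above):

    if (samples == 0) {
      return 0; /* nothing to read; don't touch the buffer */
    }
    lovrAssert(offset + samples < soundData->samples, "Out of range");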
The tightness parameter is the amount of force exerted on a collider to
resolve collisions and enforce joint operation. Low values make joints
loose; high values make them tight and can cause the collider to
overshoot the joint target. With tightness set to 0 the joint loses its
function. Going above 1 puts even more energy into joint oscillations.
The tightness parameter is called ERP in the ODE manual.
The responseTime affects the time constant of physics simulation, both
for collisions and for joint inertia. Low responseTime values make
simulation tight and fast, higher values make it sluggish. For
collisions it affects how fast penetration is resolved, with higher
values resulting in spongy objects with more surface penetration and
slower collision resolving. For joints the responseTime is similar to
inertia, with higher responseTime values resulting in slow oscillations.
The oscillation frequency is also affected by collider mass, so
responseTime can be used to tweak the joint to get desired frequency
with specific collider mass. Values higher than 1 are often desirable,
especially for very light objects. Unlike tightness, responseTime is
tweaked in orders of magnitude with useful values (depending on mass)
being between 10^-8 and 10^8.
Both parameters can be applied to World for simulation-wide usage, or
specified per-joint in case of distance and ball joints. Other joints
don't allow customizing these parameters, and will use World settings
instead.
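For reference, a hedged sketch of both cases in ODE terms (tightness is
ERP, as noted above; mapping responseTime onto ODE's CFM is an
assumption here):

    #include <ode/ode.h>

    void setWorldParams(dWorldID world, float tightness, float responseTime) {
      dWorldSetERP(world, tightness);    /* joint/collision tightness */
      dWorldSetCFM(world, responseTime); /* assumed mapping */
    }

    void setBallJointParams(dJointID joint, float tightness, float responseTime) {
      dJointSetBallParam(joint, dParamERP, tightness);
      dJointSetBallParam(joint, dParamCFM, responseTime);
    }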