- Backported the OCULUSGO device type enumerant. Need to test to
determine if the Oculus Go still reports this device type or if
it just reports unknown.
- A more involved fix will be to use JNI to discover the build model
from the Android settings.
- Make the render loop synchronous by hijacking requestAnimationFrame
to run on the XRSession when active.
- Convert os_web to use emscripten's native HTML5 interface instead
of going through GLFW.
- Stop using preinitialized GL context -- lovrPlatformCreateWindow
now creates the context.
- GLES2/3 emulation is not necessary.
- Remove inline sessions. The VR simulator is used to render to the
Canvas instead. webxr_attach and webxr_detach are used to replace
the active headset driver with the webxr driver when an immersive
session starts.
- Add noop desktop_getSkeleton.
- lovr.headset.newModel accepts an optional options table as the
second argument. There is currently a single option named
'animated' that can be used to request an animatable model. It
isn't clear yet whether this should just be a hint.
- lovr.headset.animate (name pending) can be called with a device
and a model (usually an animated model from headset.newModel, but
this is not required). The function attempts to animate the Model
to match the pose of the device in an opaque driver-specific way,
and returns whether or not this was successful (see the sketch
after this list).
- OpenVR provides controller models with a system called "components"
that can be used to animate the individual buttons. The OpenVR
headset driver now implements the 'animate' function using these
controller components, making it easy to load and render animated
controllers.
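A minimal usage sketch of the two new calls, based on the description
above: the 'animate' name is still pending, and 'hand/left', the lovr
callbacks, and the draw call are just the usual LÖVR API.

    local model

    function lovr.load()
      -- Ask for a model that can be animated (currently just a hint).
      model = lovr.headset.newModel('hand/left', { animated = true })
    end

    function lovr.draw()
      if model then
        -- Try to pose the model to match the device's current state in an
        -- opaque, driver-specific way; the return value reports success.
        local animated = lovr.headset.animate('hand/left', model)
        local x, y, z, angle, ax, ay, az = lovr.headset.getPose('hand/left')
        model:draw(x, y, z, 1, angle, ax, ay, az)
      end
    end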
* lovrPlatformGetBundlePath was missing the root argument
* ANDROID_SDK can't be assumed to be the parent of the ndk folder, since the NDK may be a side-by-side installation. Instead, ANDROID_SDK should be provided with -D
* One more thing we could mention in the docs that I ran into: installing Java with apt gave me an incompatible version. It worked better to just set -DJAVA_HOME= to the Java that comes with Android Studio (/snap/android-studio/91/android-studio/jre on Ubuntu).
Add entrypoints, headset backend code, fill in the Activity, and
add various special cases to account for the asynchronous render loop,
lack of sRGB support, and OpenGL state resets.
There are 4 new devices: beacon/1 through beacon/4. They represent
tracking references like SteamVR base stations or Oculus cameras.
There are 4 because that's how many base stations you can have in
a single tracking setup.
Right now only OpenVR exposes poses for them.
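As a rough sketch of reading them from a script (this just uses the
existing lovr.headset pose functions; only OpenVR reports data today):

    for i = 1, 4 do
      local beacon = 'beacon/' .. i
      if lovr.headset.isTracked(beacon) then
        local x, y, z = lovr.headset.getPosition(beacon)
        print(beacon, x, y, z)
      end
    end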
As mentioned on Slack, there are some situations you can get into (high load in some place or other) where the newer frame submission API will behave much more consistently, and I've noticed no negative effects.
Besides, the other one is deprecated, as best I can tell.
Nodes can have either a transform matrix, or decomposed transform
properties, but never both. Using a union means we can store both
of those variants in the same piece of memory, using the existing
matrix boolean to figure out which one to use.
This reduces the size of the struct by 48 bytes (152 -> 104), which
ends up speeding up some model operations, I'm guessing due to the
CPU cache.
Currently nobody returns data for them, though headset drivers could
start to provide poses estimated from the head pose and IPD info.
This also makes it easier to integrate eye tracking later.
This change shifts responsibility for creating OpenGL framebuffers
over to lovr, which now builds them from vrapi-provided swapchain
texture handles.
Previously, the LovrApp component of lovr-oculus-mobile was creating
framebuffers and passing native framebuffer IDs to lovr. With this
change, lovr-oculus-mobile passes vrapi's swapchain textures to lovr
unmodified. This allows lovr to create canvases using its conventional
method and also means that the properties of the canvases are no longer
hardcoded, so things like resolution and multisampling can be
customized.
There were also some issues with multiview canvases in LÖVR due to some
misconceptions about how multisampled multiview rendering works. These
issues have also been fixed in this commit.
- One toplevel Tupfile that makes it more clear what happens.
- Add config flags for -Werror, -fsanitize, and separate debug/optimize flags.
- Automatically integrate with libs built by CMake (build folder, rpath, libs folder).
- Disabling modules actually works; only the stuff that's needed is built.
Currently:
- load/update/run/etc. take place on the boot.lua coroutine.
- draw happens "asynchronously" on the main thread.
When C needs to throw an error, it doesn't know which thread to
throw the error on. If it throws it on the wrong thread, you get
a crash instead of an error screen.
One way to fix this is to change the error context based on the
thread that's currently running, so that errors in C are thrown
on the correct thread. This is the approach that's taken here.
A potentially better approach would be to run all the code on the
same thread, but I ran into issues when I tried to do this.
It may also be possible to (ab)use the Lua panic handler to catch
errors on one of the threads and somehow forward them to the other.
This means Lua print() statements can be filtered out from everything else (because internal LÖVR logging uses loglevel DEBUG and LÖVR errors use loglevel WARN).
Because of how and when draws occur in our Oculus Mobile path, during a restart it would attempt to draw a frame after lovrGraphicsDestroy() is called, leading to a crash in lovrGraphicsSetCamera(). This blocks draws until the restart is finished and renderTo() has been called (conveniently detectable using the existing state.renderCallback).
Returns the predicted display time, which is the estimated time at which
the photons of the next frame will hit the eyeballs of a person in the HMD.
This should be used instead of lovr.timer.getTime when rendering
something that is time-dependent. Updating simulations, running logic, or
accessing high-frequency time should still use lovr.timer.getTime.
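A rough sketch of the intended split, assuming the new function ends up
exposed as lovr.headset.getDisplayTime (the exact name may differ):

    function lovr.draw()
      -- Time-dependent rendering uses the predicted display time...
      local t = lovr.headset.getDisplayTime()
      lovr.graphics.cube('fill', math.sin(t), 1.7, -2, .3, t)
    end

    function lovr.update(dt)
      -- ...while simulation and logic keep using the regular clock.
      local now = lovr.timer.getTime()
    end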
It's still a rough draft and likely only works on my machine, but can be
improved over time.
Rough explanation:
- tup.config contains high-level build configuration defaults.
- Tuprules.tup contains mostly compiler flags (generated from the
tup.config) and declares some macros used to compile code.
- Tupfile takes all generated object files and links them into the
lovr executable.
- src/Tupdefault defines the default build steps for src and all
subdirectories, which is to compile all .c files to .o files and put
them in the <objects> bucket for linking by the toplevel Tupfile.
It's possible to have multiple configs active at once for different
platforms, projects, etc. To do this, create a folder for each build
variant you want, and place a tup.config in each folder (it can be a
symlink, which is helpful). Then, invoking `tup` will build all your
variants, or you can build a specific one by doing `tup <foldername>`.
- Ref struct only stores refcount now and is more general.
- Proxy stores a hash of its type name instead of an enum.
- Variants store additional information instead of using a vtable.
- Remove the concept of superclasses from the API.
- Clean up some miscellaneous includes.
If we expose both unhanded hands and handed hands, people need to
deal with handling (haha) both cases in their apps. It's simpler
to always deal with left and right hands, even though it is a bit
less general. Still, this is congruent with the current state of
OpenVR and OpenXR, and I think there are still open questions about
the more uncommon cases where there are more than two hands.
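To make that concrete: handling hands in an app then just means looping
over the two fixed device names (a sketch using the existing tracking
and pose functions):

    local hands = { 'hand/left', 'hand/right' }

    function lovr.update(dt)
      for _, hand in ipairs(hands) do
        if lovr.headset.isTracked(hand) then
          local x, y, z = lovr.headset.getPosition(hand)
          -- use the hand pose here
        end
      end
    end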