The Quest runtime added grip pose emulation when hand tracking is
active. This is actually pretty cool, and maybe LÖVR should do it on
other runtimes too, but it broke the Quest hand mesh animation, for
some complicated reasons:
- Previously, getPose('hand/*') was returning the wrist pose because
LÖVR fell back to hand tracking data when the controller wasn't
tracked.
- Because of this, coupled with the fact that hand/controller models are
expected to be drawn at the hand pose, hand meshes were animated such
that the root node was located at the wrist pose.
- When Oculus added grip pose emulation for hand tracking, it caused a
discrepancy:
- Hand meshes were still being animated relative to their wrist pose
- getPose was now returning grip-style poses
- This resulted in hand meshes being off by approximately 90 degrees.
The fix is to locate skeletal joints relative to the grip pose when
animating Oculus hand meshes, and to place the origin/wrist at its real
pose instead of assuming it's the origin.
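As a sanity check after the fix, a hand model drawn at the device pose should line up whether a controller or tracked hands are active. A minimal sketch (assuming the one-argument lovr.headset.animate variant):

```lua
local models = {}

function lovr.draw(pass)
  for _, hand in ipairs({ 'hand/left', 'hand/right' }) do
    if lovr.headset.isTracked(hand) then
      models[hand] = models[hand] or lovr.headset.newModel(hand)
      if models[hand] then
        lovr.headset.animate(models[hand]) -- pose the skeletal joints
        local x, y, z, angle, ax, ay, az = lovr.headset.getPose(hand)
        pass:draw(models[hand], x, y, z, 1, angle, ax, ay, az)
      end
    end
  end
end
```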
The radius is also included as the 4th number in the table,
but I think this was a mistake.
Not going to remove it yet, but maybe we can start to prefer reading it
from a string key.
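For illustration, the two ways of reading a joint's radius from the lovr.headset.getSkeleton joint tables; the string-key name here is hypothetical:

```lua
local skeleton = lovr.headset.getSkeleton('hand/left')

if skeleton then
  for _, joint in ipairs(skeleton) do
    local x, y, z = joint[1], joint[2], joint[3]
    local radius = joint[4]        -- current positional form (may be deprecated)
    -- local radius = joint.radius -- hypothetical string-key form to prefer later
  end
end
```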
Origin type used to be a queryable property of the VR system that
indicated whether tracking was roomscale or seated-scale.
The t.headset.offset config value could be used to design an
origin-agnostic experience, which by default shifted content up 1.7
meters when tracking was seated-scale. That way, stuff rendered at
y=1.7m was always at "eye level". It worked pretty well.
It's getting replaced with a t.headset.seated flag.
- If seated is false (the default), the origin of the coordinate space
will be on the floor, enabling the y=1.7m eye level paradigm. If
tracking is not roomscale, a floor offset of 1.7m will be emulated.
- If seated is true, the origin of the coordinate space will be y=0
at eye level (where the headset was when the app started). This is
the case on both roomscale and seated-scale tracking.
So basically 'seated' is an opt-in preference for where the app wants
its vertical origin to be.
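A minimal conf.lua sketch opting into the seated origin:

```lua
-- conf.lua
function lovr.conf(t)
  -- false (default): origin on the floor, y = 1.7m is roughly eye level
  -- (emulated with a 1.7m offset when tracking isn't roomscale).
  -- true: origin at eye level, y = 0 is where the headset started,
  -- regardless of the tracking type.
  t.headset.seated = true
end
```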
One advantage of this is that it's possible to consistently get a y=0
eye level coordinate space, which was not possible before. This makes
it easier to design simpler experiences that only need to render a
floating UI and don't want to render a full environment or deal with
offsetting everything relative to a 'floor'. This also makes it easier
to implement hybrid VR+flatscreen experiences, because the camera is at
y=0 when the headset module is disabled.
The opt-in nature of the flag, coupled with the fact that it is
consistent across all types of tracking and hardware, is hopefully a
more useful design.
You can do lovr.headset.getPose('floor') to get the offset of the stage
relative to the local origin if you want to draw something at the center
of the play area.
Also lovr.headset.isTracked('floor') basically tells you if it's roomscale.
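Putting the two together, a small sketch that marks the center of the play area when roomscale tracking is available:

```lua
function lovr.draw(pass)
  if lovr.headset.isTracked('floor') then -- effectively "is tracking roomscale?"
    local x, y, z = lovr.headset.getPose('floor')
    pass:cube(x, y + .05, z, .1) -- small marker at the stage origin
  end
end
```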
If the very first graphics-related thing done in a frame is drawing a
model, the animation logic would be skipped because a new frame hadn't
started yet. lovrModelAnimateVertices needs to unconditionally start a
new frame. (Previously, a new frame was guaranteed to be started
because all passes were temporary, but this is no longer the case.)
- rm Pass:getTallyCount. It's unclear if this reports the current tally
count, or the number of tallies in the last submit. lovr was even
getting this confused internally (fixed).
- rm tally index argument from Pass:beginTally and Pass:finishTally.
The tally index is now an autoincremented value managed internally,
and both :beginTally/:finishTally return it. If someone wants to use
their own indices, a lookup table can be used to do the mapping.
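For code that wants its own tally naming, a small wrapper can map custom keys to the autoincremented indices, e.g.:

```lua
local tallies = {}

local function measure(pass, key, draw)
  pass:beginTally()
  draw(pass)
  tallies[key] = pass:finishTally() -- both calls return the tally index
end
```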
- lovr.headset.getPassthrough returns current passthrough mode
- lovr.headset.setPassthrough sets the passthrough mode
- nil --> uses the default passthrough mode for the headset
- bool --> false = opaque, true = one of the transparent modes
- string --> explicit PassthroughMode
- lovr.headset.getPassthroughModes returns a table of supported modes
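A sketch of how the passthrough API composes (assuming setPassthrough reports success):

```lua
for _, mode in ipairs(lovr.headset.getPassthroughModes()) do
  print('supported passthrough mode:', mode)
end

if not lovr.headset.setPassthrough(true) then -- any transparent mode
  lovr.headset.setPassthrough('opaque')       -- explicit PassthroughMode
end

print('active mode:', lovr.headset.getPassthrough())
```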
Creates a lightweight copy of a Model, for situations where a single
model needs to be rendered with multiple poses in a single frame, which
was previously not possible.
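Assuming the copy surfaces as Model:clone, rendering one rig with two different poses in a frame might look like:

```lua
local model = lovr.graphics.newModel('character.glb')
local copy = model:clone() -- shares mesh/texture data, has its own node poses

function lovr.draw(pass)
  model:animate(1, lovr.timer.getTime()) -- e.g. a walk animation
  copy:animate(2, lovr.timer.getTime())  -- e.g. a wave animation
  pass:draw(model, -1, 0, -3)
  pass:draw(copy, 1, 0, -3)
end
```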
Enables automatic CPU/GPU timing for all passes. Defaults to true
when graphics debugging is active, but can be enabled/disabled manually.
When active, Pass:getStats will return submitTime and gpuTime table
keys, respectively indicating CPU time the Pass took to record and the
time the Pass took to run on the GPU. These have a delay of a few
frames.
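Reading the new keys from Pass:getStats, keeping the few-frame delay in mind:

```lua
function lovr.draw(pass)
  -- ...record draws...
  local stats = pass:getStats()
  if stats.submitTime and stats.gpuTime then
    print('CPU record time:', stats.submitTime, 'GPU time:', stats.gpuTime)
  end
end
```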
This doesn't include a way to get "global" timing info for a submit.
That would be useful because it wouldn't require users to sum the
timing info across all the passes, and it would include other work
like transfers, synchronization, and CPU waits. However, this is more
challenging to implement under the current architecture and will be
deferred until later. Even if it were added, the per-pass timings would
remain useful.
- Make sure to reset barriers for compute/canvas resources too
- Delay stream ending so OpenXR layout transitions actually go in an
active command buffer.
If you switch to/from a compute shader and the other shader is either
nil or a graphics shader, clear bindings.
Maybe if you switch to/from nil the bindings shouldn't be cleared, but
this is a bit more complicated to implement and it's not clear that
there's any reason not to treat nil shaders as graphics shaders.
Previously, if you wanted to run compute operations that depend on the
results of prior compute operations, you had to put these in 2 different
passes, because logically all of the compute calls in a pass run "at the
same time" (or we're at least giving the GPU the freedom to do that).
Having to set up an entirely new pass just to synchronize 2 :compute
calls is pretty cumbersome, and incurs extra overhead. It would be
possible to change things so *every* :compute call waits for previous
computes to finish, but this would destroy GPU parallelism.
The Pass:barrier method lets compute calls within a pass synchronize
with each other, without requiring multiple passes. Adding a barrier
basically means "hey, wait for all the :compute calls before the barrier
to finish before running future :computes".
This lets things remain highly parallel but allows them to be easily
synchronized when needed.
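A sketch of two dependent dispatches in one pass; the shaders here are hypothetical:

```lua
pass:setShader(scatterShader)
pass:compute(256)  -- first wave of compute work
pass:barrier()     -- later :computes wait for everything above to finish
pass:setShader(reduceShader)
pass:compute(1)    -- can safely read the results of the first dispatch
```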
Pass stores draw commands rather than sending them to Vulkan
immediately.
The main motivation is to allow more flexibility in the Lua API. Passes
are now regular objects, aren't invalidated whenever submit is called,
and can cache their draws across multiple frames. Draws can also be
internally culled, sorted, and batched.
Some API methods (tallies) are missing, and there are still some bugs to
fix, notably with background color.
View count is well-defined to be 2 with the current view configuration,
and people should be able to rely on getViewCount even before the views
are tracked. It returns the number of views in the view configuration,
not the number of views with valid data.
- If timestamp is zero (before .update is called), return empty data
instead of erroring.
- Check for valid position/orientation separately, and return empty data
  for anything that's invalid. Previously, both position and orientation
  were used if either was valid, which produced undefined results.
- Add Buffer:newReadback
- Add Buffer:getData
- Buffer:getPointer works with permanent buffers
- Buffer:setData works with permanent buffers
- Buffer:clear works with permanent buffers
- Add Texture:newReadback
- Add Texture:getPixels
- Add Texture:setPixels
- Add Texture:clear
- Add Texture:generateMipmaps
- Buffer readbacks can now return tables in addition to Blobs using Readback:getData
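An example of the async and sync readback paths; the buffer format and contents are illustrative:

```lua
local buffer = lovr.graphics.newBuffer('float', { 1, 2, 3, 4 })

-- Asynchronous: a Readback completes a few frames later
local readback = buffer:newReadback()

function lovr.update()
  if readback:isComplete() then
    local values = readback:getData() -- table of values (a Blob is also available)
  end
end

-- Synchronous convenience: stalls, and currently submits (invalidating Passes)
local values = buffer:getData()
```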
Tally is coming back soon with an improved API, it's temporarily removed
since it made the transfer rework a bit easier.
Note that synchronous readbacks (Buffer:getData, Texture:getPixels)
internally call lovr.graphics.submit, so they invalidate existing Pass
objects. This will be improved soon.