This allows them to be initialized/destroyed from multiple threads in
any order. Previously, the first thread to require a module had to be
the last thread to use the module, otherwise it would be destroyed too
early.
There are still a few issues. If the main thread doesn't require a
module, it won't pick up the conf.lua settings. Also graphics isn't
handling the shader cache writing properly. And I think this breaks the
headset-graphics refcounting. But these will be fixed in future
commits.
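In C11 terms, the per-module lifetime might look like the following sketch (names are hypothetical, not the actual lovr API; a real implementation must also make later threads wait until the first one finishes initializing):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

// Hypothetical sketch: each module carries an atomic refcount so any thread
// can require or release it, in any order.  Note that a real implementation
// also needs to block other threads until initialization completes.
typedef struct {
  atomic_int ref;
  bool initialized;
} Module;

// Returns true when this call performed the one-time initialization.
static bool moduleInit(Module* module) {
  if (atomic_fetch_add(&module->ref, 1) == 0) {
    module->initialized = true; // real init work goes here
    return true;
  }
  return false;
}

// Returns true when this call performed the final destruction.
static bool moduleDestroy(Module* module) {
  if (atomic_fetch_sub(&module->ref, 1) == 1) {
    module->initialized = false; // real teardown goes here
    return true;
  }
  return false;
}
```

With this shape, whichever thread happens to release the last reference performs the teardown, so require/release order across threads no longer matters.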
- Archive is now an object that has a refcount
- Archives are stored in linked list instead of array
- Not exposed to Lua yet, but could be in the future
- core/zip was merged into the filesystem module
- Mountpoints are handled centrally instead of per-archive
- Zip doesn't pre-hash with the mountpoint anymore
- mtime is truly only computed on request for zips
Mountpoints don't work properly yet.
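A minimal sketch of the new Archive shape (hypothetical names; the real struct carries paths, callbacks, and mount info):

```c
#include <assert.h>
#include <stdlib.h>

// Hypothetical sketch: Archives are refcounted objects kept in an intrusive
// singly-linked list instead of a fixed array.
typedef struct Archive {
  int ref;
  struct Archive* next;
} Archive;

// Creates an archive with one reference and links it at the head of the list.
static Archive* archiveCreate(Archive** head) {
  Archive* archive = calloc(1, sizeof(Archive));
  archive->ref = 1;
  archive->next = *head;
  *head = archive;
  return archive;
}

static void archiveRetain(Archive* archive) {
  archive->ref++;
}

// Unlinks and frees the archive once the last reference is released.
static void archiveRelease(Archive** head, Archive* archive) {
  if (--archive->ref > 0) return;
  for (Archive** link = head; *link; link = &(*link)->next) {
    if (*link == archive) {
      *link = archive->next;
      break;
    }
  }
  free(archive);
}
```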
- Cubemaps can have any layer count that is a multiple of 6.
- A cubemap with more than 6 layers will be a cubemap array image view.
- This isn't perfect because it conflates regular cubemaps with
6-layer cubemap arrays.
- Enable the vk feature, handle the spv feature, add getPixel helper.
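The layer-count rule can be sketched like this (hypothetical names, not the actual validation code):

```c
#include <assert.h>
#include <stdbool.h>

typedef enum { VIEW_CUBE, VIEW_CUBE_ARRAY } CubeViewType;

// Cube textures may have any positive layer count that is a multiple of 6.
static bool cubeLayersValid(int layers) {
  return layers > 0 && layers % 6 == 0;
}

// More than 6 layers selects a cube array image view; exactly 6 stays a
// regular cubemap, which is the conflation mentioned above.
static CubeViewType cubeViewType(int layers) {
  return layers > 6 ? VIEW_CUBE_ARRAY : VIEW_CUBE;
}
```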
- 'sample' now implies both sample and linear filtering (practically always
true for all formats lovr supports)
- 'render' now includes 'blend' for color formats (also practically
always true except for r32f on some old mobile GPUs)
- 'blit' now includes 'blitsrc'/'blitdst' because lovr doesn't support
blitting between textures with different formats
- 'atomic' is removed because lovr doesn't really support atomic images yet
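As a sketch with hypothetical flag names, the merging works by ANDing the related low-level caps so a single reported bit means "fully usable":

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

// Hypothetical low-level caps reported per format by the GPU backend.
enum {
  CAP_SAMPLE   = (1 << 0), // sampled image
  CAP_LINEAR   = (1 << 1), // linear filtering
  CAP_RENDER   = (1 << 2), // render target
  CAP_BLEND    = (1 << 3), // blending
  CAP_BLIT_SRC = (1 << 4),
  CAP_BLIT_DST = (1 << 5)
};

// Hypothetical merged features exposed to the user.
enum {
  FEATURE_SAMPLE = (1 << 0), // sample + linear filtering
  FEATURE_RENDER = (1 << 1), // render + blend (for color formats)
  FEATURE_BLIT   = (1 << 2)  // blit source + blit destination
};

static uint32_t mergeFormatFeatures(uint32_t caps, bool color) {
  uint32_t features = 0;
  uint32_t sample = CAP_SAMPLE | CAP_LINEAR;
  uint32_t render = color ? (CAP_RENDER | CAP_BLEND) : CAP_RENDER;
  uint32_t blit = CAP_BLIT_SRC | CAP_BLIT_DST;
  if ((caps & sample) == sample) features |= FEATURE_SAMPLE;
  if ((caps & render) == render) features |= FEATURE_RENDER;
  if ((caps & blit) == blit) features |= FEATURE_BLIT;
  return features;
}
```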
The technique used only works for AABBs. Trying to apply the model
matrix to the extent like that isn't valid. For now, switch back to
the naive approach. This is quite a bit slower, but at least it's
correct.
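The naive approach amounts to transforming all 8 corners and taking the min/max, as in this sketch (column-major 4x4 matrix, hypothetical name):

```c
#include <assert.h>
#include <float.h>

// Transform an AABB by running each of its 8 corners through the model
// matrix (column-major 4x4) and accumulating the min/max of the results.
// Slower than operating on center/extent, but correct for any affine matrix.
static void transformBounds(const float* m, const float min[3], const float max[3], float outMin[3], float outMax[3]) {
  for (int i = 0; i < 3; i++) {
    outMin[i] = FLT_MAX;
    outMax[i] = -FLT_MAX;
  }
  for (int c = 0; c < 8; c++) {
    float p[3] = {
      (c & 1) ? max[0] : min[0],
      (c & 2) ? max[1] : min[1],
      (c & 4) ? max[2] : min[2]
    };
    for (int i = 0; i < 3; i++) {
      float v = m[i] * p[0] + m[4 + i] * p[1] + m[8 + i] * p[2] + m[12 + i];
      if (v < outMin[i]) outMin[i] = v;
      if (v > outMax[i]) outMax[i] = v;
    }
  }
}
```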
If there is only a single pass in the submit, barrierCount is zero
since there will be no inter-pass synchronization. This is almost
correct, but not quite, because if a Pass has compute and render work,
the render pass may need to synchronize with the compute pass. So a
barrier is still necessary. For simplicity, always allocate the full
number of barriers, even though the final render barrier will always be
empty.
Additionally, this avoids passing NULL to memset when the barrier count is
zero and the barrier arrays are NULL.
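The guard is the usual one (memset with a null pointer is undefined behavior even when the size is zero; names here are hypothetical):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

// Only call memset when there is actually something to clear, so a NULL
// barrier array with a zero count never reaches memset.
static void clearBarriers(void* barriers, size_t count, size_t stride) {
  if (count > 0) {
    memset(barriers, 0, count * stride);
  }
}
```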
This fixes issues where some fonts would have glyphs with weird windings
and they would get rendered inside-out.
Unfortunately updating msdfgen increased its size by a factor of 2-3x.
- rm :getTallyData, it's totally lame, just do a readback
- rm gpu_tally_get_data too, webgpu doesn't support it anyway
- Clamp tally copy count so it doesn't overflow buffer
- Tally buffer offset's gotta be a multiple of 4
- Return nil instead of 2 values when tally buffer isn't set
- Copy correct number of tallies (multiply by view count instead of max
view count)
- Skip occlusion queries entirely if no tally buffer was set
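The clamping and alignment rules can be sketched like so (hypothetical names; sizes in bytes):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

// Clamp the number of tallies copied so the write never runs past the end of
// the destination buffer.
static uint32_t clampTallyCopy(uint32_t count, uint32_t offset, uint32_t bufferSize, uint32_t tallySize) {
  if (offset >= bufferSize) return 0;
  uint32_t available = (bufferSize - offset) / tallySize;
  return count < available ? count : available;
}

// The destination offset must be 4-byte aligned.
static bool tallyOffsetValid(uint32_t offset) {
  return offset % 4 == 0;
}
```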
Restores ability to open window after initializing graphics module.
Surface is created lazily instead of being required upfront.
Use native platform handles instead of GLFW's callbacks.
Some minor reorganization around core/gpu present API and xr transitions.
Linux links against libxcb/libX11/libX11-xcb for XGetXCBConnection.
The message box is meant to be a hack to improve UX on Windows, not an
officially supported feature of core/os. So it's more appropriate to
inline it in the one place/platform where it's used.
GLFW reports window size as zero on Windows when the desktop window is
minimized. This is by design. Using zero width/height for window
textures isn't valid. The fix is to ignore resize events where the
width or height is zero and also cache the last-valid window size so it
can be reported by os_window_get_size. Sighs...
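A sketch of the fix (hypothetical names):

```c
#include <assert.h>
#include <stdint.h>

// Last valid window size, reported by os_window_get_size even while the
// window is minimized.
static uint32_t cachedWidth = 0, cachedHeight = 0;

// Resize events with a zero dimension come from a minimized window on
// Windows; ignore them and keep the cached size.
static void onResize(uint32_t width, uint32_t height) {
  if (width == 0 || height == 0) return;
  cachedWidth = width;
  cachedHeight = height;
}

static void windowGetSize(uint32_t* width, uint32_t* height) {
  *width = cachedWidth;
  *height = cachedHeight;
}
```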
These are called when creating/destroying Thread objects. It's
currently only implemented on Android, where it attaches/detaches the
Java VM to the thread. This allows JNI calls to be used on threads.
However I don't think e.g. `lovr.system.requestPermission` will work on
a thread yet, because it uses the global JNI env from the main thread
instead of the thread's JNI env. Still, attaching/detaching the VM is
an improvement and will allow well-behaved JNI methods to work on
threads now.
I don't know how expensive this is, yolo.
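The hook pair might look like this sketch (hypothetical names; on Android the bodies would wrap AttachCurrentThread/DetachCurrentThread on the JavaVM, and on other platforms they would be no-ops):

```c
#include <assert.h>

// Tracks balance for this sketch; a real Android implementation would hold
// the JavaVM* and the per-thread JNIEnv* instead.
static int attachedThreads = 0;

static void os_thread_attach(void) {
  attachedThreads++; // Android: (*vm)->AttachCurrentThread(vm, &env, NULL)
}

static void os_thread_detach(void) {
  attachedThreads--; // Android: (*vm)->DetachCurrentThread(vm)
}

// The thread entry point brackets the thread body with the hooks so JNI
// calls made inside the body find an attached VM.
static void threadMain(void (*body)(void)) {
  os_thread_attach();
  if (body) body();
  os_thread_detach();
}
```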
Just draw a sphere. The transform is rotated so the sphere segments
line up better, because spheres and capsules use different orientations
for their sphere parts. Also the "degenerate" z axis is reconstructed
to be perpendicular to the x/y axes. This doesn't seem like it will be
particularly fast, but hopefully people aren't drawing zero-length
capsules too often. There might be an opportunity to shortcut the
rotation since it's 90 degrees and would just involve swapping columns.
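Reconstructing the degenerate axis is just a cross product of the two healthy axes, as in this sketch (hypothetical name; a real implementation would also normalize the result):

```c
#include <assert.h>

// The cross product of the two valid basis vectors yields a third vector
// perpendicular to both, replacing the degenerate axis.
static void reconstructAxis(const float x[3], const float y[3], float z[3]) {
  z[0] = x[1] * y[2] - x[2] * y[1];
  z[1] = x[2] * y[0] - x[0] * y[2];
  z[2] = x[0] * y[1] - x[1] * y[0];
}
```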
- state.features.overlay should remain a bool since it just indicates
whether the extension is supported/enabled.
- split the config value into a bool/u32 pair so the full u32 range can
be used for the order (seems important to coordinate with other apps).
- A boolean still works like before, which uses 0 as the order.
- Last row of transform matrix is unused, make it 4x3
- Requires funny row-major packing due to vec3 std140 padding.
- Teach spirv parser to tolerate non-square matrix types, though
they aren't supported anywhere else yet.
- Compute cofactor in shader for normal matrix, ALU is free,
optimize out many terms, rm maf_cofactor.
- Take out complex UBO alignment logic since stuff is PO2 these days.
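For reference, here is the cofactor of the upper-left 3x3 of a column-major 4x4 transform, written in C rather than the shader (hypothetical name). It behaves like the inverse-transpose for normal transformation but needs no division, which is why it's cheap to compute directly:

```c
#include <assert.h>

// Cofactor matrix of the upper-left 3x3 of a column-major 4x4 matrix,
// written out column-major into a 3x3 array.
static void cofactor(const float* m, float out[9]) {
  out[0] = m[5] * m[10] - m[6] * m[9];
  out[1] = m[6] * m[8] - m[4] * m[10];
  out[2] = m[4] * m[9] - m[5] * m[8];
  out[3] = m[2] * m[9] - m[1] * m[10];
  out[4] = m[0] * m[10] - m[2] * m[8];
  out[5] = m[1] * m[8] - m[0] * m[9];
  out[6] = m[1] * m[6] - m[2] * m[5];
  out[7] = m[2] * m[4] - m[0] * m[6];
  out[8] = m[0] * m[5] - m[1] * m[4];
}
```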
This was a common bottleneck for some workloads, so there are measurable
performance gains (up to 2x faster pass submission on CPU). GPU time is
identical, at least on desktop.
LOVR doesn't require OpenXR to run. When the headset module is enabled
and the openxr headset driver is enabled, LOVR tries to initialize
OpenXR, and if it fails then it will try the next driver.
The OpenXR loader will print error messages to stderr by default. This
is undesirable because someone who is unfamiliar with OpenXR will see a
bunch of messages in their console that say "ERROR" and think something
is wrong, even though the messages are innocuous and don't indicate an
actual problem.
The only way to silence these messages from the OpenXR loader, to my
knowledge, is to set the XR_LOADER_DEBUG environment variable to 'none'.
This is only done when the environment variable isn't set, so it's still
possible to set XR_LOADER_DEBUG to see the logs.
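With POSIX setenv, the "only when the variable isn't already set" behavior is exactly the overwrite = 0 flag (sketch; the Windows path would differ):

```c
#include <assert.h>
#include <stdlib.h>

// Silence the OpenXR loader's stderr logging, but only if the user hasn't
// already chosen a level: overwrite = 0 leaves an existing value alone.
static void quietOpenXRLoader(void) {
  setenv("XR_LOADER_DEBUG", "none", 0);
}
```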
Most OBJ loaders use OpenGL texture coordinate conventions.
After switching to Vulkan, the UV origin became upper-left and images no
longer needed to be flipped on import. This means that the OBJ importer
now needs to flip its UVs to compensate. Somehow, no one noticed until
now! Most people are using glTF I guess.
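The importer-side fix is just a V flip over the interleaved UVs (sketch, hypothetical name):

```c
#include <assert.h>

// OBJ vt coordinates use an OpenGL-style lower-left origin; with an
// upper-left UV origin the V coordinate must be flipped at import time.
static void flipUVs(float* uvs, int count) {
  for (int i = 0; i < count; i++) {
    uvs[2 * i + 1] = 1.f - uvs[2 * i + 1];
  }
}
```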
Recent SteamVR versions have bugs with it, especially after triggering a
recenter operation.
In SteamVR, recentering fires referenceSpaceChangePending for the LOCAL
space, then the STAGE space, then the LOCAL space again, all with
different changeTimes. No poseInPreviousSpace is given.
Recreating the main reference space whenever this event is received
leads to strange, inconsistent issues. Sometimes the local/stage spaces
end up on top of each other, other times one or both will be way up in
the air (putting the headset at negative y coordinates).
This bug is even present when recentering in the compositor, so it's not
an issue with lovr. Cautiously disabling the local-floor emulation on
SteamVR runtimes and just always using the STAGE space until things are
sorted out.
The "vec3 is 4 floats" thing was consistently confusing to people. It's
reverted everywhere except for Curve.
maf now has full sets of methods for vec2/vec3/vec4, for consistency.
Vector bindings now use luax_readvec* helper functions for the
number/vector variants, and use maf for most functionality, which cleans
things up a lot.
Some compile fixes and a rename from gpu_wgpu to gpu_web, since wgpu
refers to a specific implementation of WebGPU and I'm really bad at
typing it for some reason.
- Adds Pass:setViewCull to enable/disable frustum culling.
- Renames Pass:setCullMode to Pass:setFaceCull (with backcompat).
Some stuff currently missing:
- Text is not culled, but should be.
- VR view frusta are not merged yet.
It's important that the bits for the vector type occupy the least
significant bits, so that vectors can be distinguished from pointer
lightuserdata.
When the vector pool was expanded, this broke, causing e.g. Blob
pointers to exhibit undefined behavior when trying to use them as
vectors.
tbh I still don't understand the union/bitfield memory layout.
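A sketch of the tagging scheme (hypothetical names and bit counts, not the actual union/bitfield layout): pointers are aligned, so their low bits are always zero, and keeping a nonzero type tag in those bits guarantees a vector handle can never be mistaken for a real pointer.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

enum { TYPE_BITS = 4, TYPE_MASK = (1 << TYPE_BITS) - 1 };

// Pack a vector type tag (must be nonzero) and pool index into a value that
// can be stored as lightuserdata.
static void* encodeVector(uintptr_t type, uintptr_t index) {
  return (void*) ((index << TYPE_BITS) | type);
}

// Returns false for aligned pointers, whose low bits are zero.
static bool isVector(void* p, uintptr_t* type, uintptr_t* index) {
  uintptr_t bits = (uintptr_t) p;
  if ((bits & TYPE_MASK) == 0) return false;
  *type = bits & TYPE_MASK;
  *index = bits >> TYPE_BITS;
  return true;
}
```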
Quest added a thing where they emulate grip pose when hand tracking is
active. This is actually pretty cool, and maybe LÖVR should do it too
on other runtimes, but it messed up the Quest hand mesh animation, for
some complicated reasons:
- Previously, getPose('hand/*') was returning the wrist pose because
LÖVR fell back to hand tracking data when the controller wasn't
tracked.
- Because of this, coupled with the fact that hand/controller models are
expected to be drawn at the hand pose, hand meshes were animated such
that the root node was located at the wrist pose.
- When Oculus added grip pose emulation for hand tracking, it caused a
discrepancy:
- Hand meshes were still being animated relative to their wrist pose
- getPose was now returning grip-style poses
- This resulted in hand meshes being off by approximately 90 degrees.
The fix is to locate skeletal joints relative to the grip pose when
animating Oculus hand meshes, and to place the origin/wrist at its real
pose instead of assuming it's the origin.
Origin type used to be a query-able property of the VR system that
indicated whether the tracking was roomscale or seated-scale.
The t.headset.offset config value could be used to design an
origin-agnostic experience, which by default shifted content up 1.7
meters when tracking was seated-scale. That way, stuff rendered at
y=1.7m was always at "eye level". It worked pretty well.
It's getting replaced with a t.headset.seated flag.
- If seated is false (the default), the origin of the coordinate space
will be on the floor, enabling the y=1.7m eye level paradigm. If
tracking is not roomscale, a floor offset of 1.7m will be emulated.
- If seated is true, the origin of the coordinate space will be y=0
at eye level (where the headset was when the app started). This is
the case on both roomscale and seated-scale tracking.
So basically 'seated' is an opt-in preference for where the app wants
its vertical origin to be.
One advantage of this is that it's possible to consistently get a y=0
eye level coordinate space, which was not possible before. This makes
it easier to design simpler experiences that only need to render a
floating UI and don't want to render a full environment or deal with
offsetting everything relative to a 'floor'. This also makes it easier
to implement hybrid VR+flatscreen experiences, because the camera is at
y=0 when the headset module is disabled.
The opt-in nature of the flag, coupled with the fact that it is
consistent across all types of tracking and hardware, is hopefully a
more useful design.
You can do lovr.headset.getPose('floor') to get the offset of the stage
relative to the local origin if you want to draw something at the center
of the play area.
Also lovr.headset.isTracked('floor') basically tells you if it's roomscale.
If the very first graphics-related thing done in a frame is drawing a
model, the reanimation logic would be skipped because a new frame hadn't
started yet. lovrModelAnimateVertices needs to unconditionally start a
new frame. (Previously, a new frame was guaranteed to be started
because all passes were temporary, but this is no longer the case).