When a memory block is used for host-visible memory, its mapped pointer
is tracked with the block. If that memory is freed and later reused
for some non-mappable memory, the pointer is never cleared, so later
code assumes the memory is still mappable and dereferences a stale
pointer.
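A minimal sketch of the fix, using hypothetical names (the real block struct in the Vulkan backend has more fields): clear the cached mapped pointer when the block's last reference goes away, so a later reuse for non-mappable memory doesn't see a dangling pointer.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

// Hypothetical memory block record; illustrative only.
typedef struct {
  void* mapped;   // non-NULL only while the block backs host-visible memory
  uint32_t refs;
} MemoryBlock;

static void block_release(MemoryBlock* block) {
  if (--block->refs == 0) {
    // The bug: without this line, a later non-mappable allocation that
    // reuses this block still sees the stale mapped pointer and tries
    // to read/write through it.
    block->mapped = NULL;
  }
}
```

After this, callers can test `block->mapped` to decide whether a staging copy is needed, and a recycled block correctly reports itself as unmapped.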
Although the name is unfortunate, this allows access to lovr.headset
when no window is opened or when the graphics module is disabled. This
requires the XR_MND_headless extension to be supported by the runtime.
This only happens when a readback is read back before a pass is created.
Really, gpu should know whether the frame has started yet and adjust the
tick index accordingly.
- Check for layers before enabling
- Check for instance/device extensions before enabling
Fixes unfriendly errors when running on a system without validation layers
installed.
Uses same table approach as OpenXR code.
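The table approach can be sketched like this (names are illustrative, not lovr's actual ones): keep a static table of requested layers, enumerate what the loader supports, and enable only the entries that are actually present instead of failing instance creation.

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

// Illustrative requested-layer table; the real code would get the
// supported list from vkEnumerateInstanceLayerProperties.
typedef struct {
  const char* name;
  bool shouldEnable; // requested by app/config
  bool enabled;      // set only if the loader actually has it
} LayerInfo;

static void selectLayers(LayerInfo* table, size_t tableCount, const char** supported, size_t supportedCount) {
  for (size_t i = 0; i < tableCount; i++) {
    table[i].enabled = false;
    if (!table[i].shouldEnable) continue;
    for (size_t j = 0; j < supportedCount; j++) {
      if (!strcmp(table[i].name, supported[j])) {
        table[i].enabled = true;
        break;
      }
    }
  }
}
```

The same pattern works for instance and device extensions: a missing validation layer quietly stays disabled instead of producing an unfriendly error.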
Some Android header defines DEPTH, which clashes with a symbol in the
OpenXR driver. This change stops including Android headers there and
declares more granular private functions instead. It also removes a few
unused private os functions.
- Allow parent CMake projects to expose symbols more easily
- Allow for custom plugins folder
- Include directories are always relative to lovr's source dir
Co-authored-by: Ilya Chelyadin <ilya77105@gmail.com>
ModelData:getTriangles currently adds a fresh set of vertices for every
mesh in a node. This is technically correct, but it wastes space when two
nodes reference the same set of vertices with different index buffers,
which is pretty common when a node has multiple materials. It also
breaks ODE, which doesn't like it when vertices outnumber indices too
much.
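One way to fix this (a sketch of the idea, not the actual patch — the real code keys off the model's buffer views): remember the base offset of each unique vertex blob, append its vertices only the first time it's seen, and have later meshes just rebase their indices.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define MAX_BLOBS 64 // illustrative cap; assumes a model has few vertex blobs

// Maps a vertex blob (identified here by pointer) to its base offset in
// the combined vertex array. Hypothetical names.
typedef struct {
  const void* blob;
  uint32_t base;
} BlobEntry;

typedef struct {
  BlobEntry entries[MAX_BLOBS];
  size_t count;
  uint32_t vertexCount; // vertices appended so far
} VertexCache;

// Returns the base vertex for a blob, registering its vertices only once.
static uint32_t getBaseVertex(VertexCache* cache, const void* blob, uint32_t blobVertexCount) {
  for (size_t i = 0; i < cache->count; i++) {
    if (cache->entries[i].blob == blob) {
      return cache->entries[i].base; // already appended; caller just rebases indices
    }
  }
  uint32_t base = cache->vertexCount;
  cache->entries[cache->count++] = (BlobEntry) { blob, base };
  cache->vertexCount += blobVertexCount; // real code would append vertex data here
  return base;
}
```

Two meshes sharing a blob then contribute one copy of the vertices, keeping the vertex/index ratio in a range ODE is happy with.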
- Add helper functions for creating shapes to avoid duplication between
newShape and newShapeCollider.
- Add lovr.physics.newMeshShape and lovr.physics.newTerrainShape
- Register TerrainShape so it has all the base Shape methods
- Smooth out a few TerrainShape warnings
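The helper pattern in the first bullet boils down to one shared constructor that both entry points call, so argument handling lives in one place. A rough sketch with hypothetical names (not lovr's exact internals):

```c
#include <assert.h>

// Shared shape construction, so a lua_newShape binding and a
// lua_newShapeCollider binding don't duplicate the parsing/creation
// logic. Names and fields are illustrative.
typedef enum { SHAPE_BOX, SHAPE_SPHERE, SHAPE_MESH, SHAPE_TERRAIN } ShapeType;

typedef struct {
  ShapeType type;
  float size[3];
} Shape;

static Shape makeShape(ShapeType type, float x, float y, float z) {
  return (Shape) { type, { x, y, z } };
}

// newShape would push the shape directly; newShapeCollider would attach
// it to a freshly created collider -- both reuse makeShape.
```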
Fixes easily-encounterable GPU OOM on discrete cards.
Currently when mapping CPU-accessible GPU memory, there are only two
types of memory: write and read.
The "write" allocations try to use the special 256MB pinned memory
region, with the thought that since this memory is usually for vertices,
uniforms, etc. it should be fast.
However, this memory is also used for staging buffers for buffers and
textures, which can easily exceed the 256MB (or 246MB on NV) limit upon
creating a handful of large textures.
To fix this, we're going to separate WRITE mappings into STREAM and
STAGING. STREAM will act like the old CPU_WRITE mapping type and use
the same memory type. STAGING will use plain host-visible memory and
avoid hogging the precious 256MB memory region.
STAGING also uses a different allocation strategy. Instead of creating
a big buffer with a zone for each tick, it's a more traditional linear
allocator that allocates in 4MB chunks and condemns the chunk if it ever
fills up. This is a better fit for staging buffer lifetimes since there's
usually a bunch of them at startup and then a small/sporadic amount
afterwards. The buffer doesn't need to double in size, and it doesn't
need to be kept around after the transfers are issued. The memory
really is single-use and won't roll over from frame to frame like the
other scratchpads.
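The STAGING allocator described above can be sketched as a plain bump allocator (sizes and names illustrative; the real one hands out mapped GPU buffers, not malloc'd memory, and defers freeing a condemned chunk until its transfers complete):

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

#define CHUNK_SIZE (4 * 1024 * 1024) // 4MB chunks, per the strategy above

typedef struct {
  char* memory;   // current chunk (a mapped GPU buffer in the real code)
  size_t cursor;  // bump pointer within the chunk
} StagingAllocator;

// Linear allocation: bump the cursor; if the chunk fills up, condemn it
// and start a fresh one. No doubling, no per-tick zones, nothing kept
// around between frames.
static void* stagingAllocate(StagingAllocator* allocator, size_t size) {
  if (size > CHUNK_SIZE) return NULL; // oversized requests need a dedicated buffer
  if (!allocator->memory || allocator->cursor + size > CHUNK_SIZE) {
    // Condemn the old chunk. Here it's freed immediately; the real code
    // must wait until pending transfers using it have finished.
    free(allocator->memory);
    allocator->memory = malloc(CHUNK_SIZE);
    allocator->cursor = 0;
  }
  void* pointer = allocator->memory + allocator->cursor;
  allocator->cursor += size;
  return pointer;
}
```

This matches the lifetime pattern described: a burst of staging allocations at startup fills a few chunks that are then thrown away, and the sporadic later uploads rarely need more than one live chunk.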
There's a "portability enumeration" extension and flag you have to set
to get Vulkan to work on macOS. If you don't set it, Vulkan hides the
MoltenVK runtime since it's not 100% conformant. The flag was added
unconditionally, but it needs to only be added when the extension is
active.
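The fix amounts to gating the flag on whether the extension was actually enabled. A sketch of the condition, using the real Vulkan names but with the constants duplicated so it compiles without the Vulkan headers:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

// Values from vulkan_core.h, duplicated here for the sketch.
#define VK_KHR_PORTABILITY_ENUMERATION_EXTENSION_NAME "VK_KHR_portability_enumeration"
#define VK_INSTANCE_CREATE_ENUMERATE_PORTABILITY_BIT_KHR 0x00000001

// Only set the portability flag when the extension made it into the
// enabled-extension list; setting it unconditionally violates valid
// usage and breaks instance creation on loaders without the extension.
static unsigned instanceFlags(const char** extensions, size_t count) {
  for (size_t i = 0; i < count; i++) {
    if (!strcmp(extensions[i], VK_KHR_PORTABILITY_ENUMERATION_EXTENSION_NAME)) {
      return VK_INSTANCE_CREATE_ENUMERATE_PORTABILITY_BIT_KHR;
    }
  }
  return 0;
}
```

The returned value would go into `VkInstanceCreateInfo::flags`, so non-macOS platforms (where the extension isn't enabled) pass zero.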