This only happens when a readback is read back before a pass is created.
Should really change gpu to know whether the frame has started yet and
adjust the tick index accordingly.
- Check for layers before enabling
- Check for instance/device extensions before enabling
Fixes unfriendly errors when running on a system without validation layers
installed.
Uses the same table approach as the OpenXR code.
Some Android header defines DEPTH, which clashes with a symbol in the
OpenXR driver. This change just stops using Android headers in there
and declares more granular private functions. It also removes a few
unused private os functions.
- Allow parent CMake projects to expose symbols more easily
- Allow for custom plugins folder
- Include directories are always relative to lovr's source dir
Co-authored-by: Ilya Chelyadin <ilya77105@gmail.com>
ModelData:getTriangles currently adds a fresh set of vertices for every
mesh in a node. This is technically correct, but it wastes space when two
nodes reference the same set of vertices with different index buffers,
which is pretty common when a node has multiple materials. It also
breaks ODE, which doesn't like it when vertices outnumber indices too
much.
- Add helper functions for creating shapes to avoid duplication between
newShape and newShapeCollider.
- Add lovr.physics.newMeshShape and lovr.physics.newTerrainShape
- Register TerrainShape so it has all the base Shape methods
- Smooth out a few TerrainShape warnings
Fixes easily-encounterable GPU OOM on discrete cards.
Currently when mapping CPU-accessible GPU memory, there are only two
types of memory: write and read.
The "write" allocations try to use the special 256MB pinned memory
region, with the thought that since this memory is usually for vertices,
uniforms, etc. it should be fast.
However, this memory is also used for staging buffers for buffers and
textures, which can easily exceed the 256MB (or 246MB on NV) limit upon
creating a handful of large textures.
To fix this, we're going to separate WRITE mappings into STREAM and
STAGING. STREAM will act like the old CPU_WRITE mapping type and use
the same memory type. STAGING will use plain host-visible memory and
avoid hogging the precious 256MB memory region.
STAGING also uses a different allocation strategy. Instead of creating
a big buffer with a zone for each tick, it's a more traditional linear
allocator that allocates in 4MB chunks and condemns the chunk if it ever
fills up. This is a better fit for staging buffer lifetimes since there's
usually a bunch of them at startup and then a small/sporadic amount
afterwards. The buffer doesn't need to double in size, and it doesn't
need to be kept around after the transfers are issued. The memory
really is single-use and won't roll over from frame to frame like the
other scratchpads.
There's a "portability enumeration" extension and flag you have to set
to get Vulkan to work on macOS. If you don't set it, Vulkan hides the
MoltenVK runtime since it's not 100% conformant. The flag was added
unconditionally, but it needs to only be added when the extension is
active.
There used to be oculus and pico loaders, but pico doesn't work anymore.
Eventually things will converge on the standard loader and we won't
need different loaders, but manifests may require flavors.
- When a memory block is freed, any allocators that were using it need
  to null out their block pointers. Otherwise they would just keep using
  the freed block.
- The emergency morgue expunge doesn't work. It would be nice if it
did, but if the emergency expunge deletes an object that exists in a
command buffer that's still being recorded, it causes all kinds of
problems and corrupts the command buffer. You'd either need to submit
these command buffers early before deleting the object (this is super
tricky) or just prevent this entirely, maybe by growing the morgue
infinitely or throwing an error if it fills up. Either way, creating
and releasing textures in a loop without submitting work will
eventually throw an 'out of memory' error. None of this is
satisfactory and I'm not sure how to solve this well yet.
- To compromise, increase the size of the morgue a bit so emergency
flushes happen slightly less often.