The current flag did not work because float shader flags are not
supported. It was also not very useful because it was per-shader
and did not use the alpha cutoff property of glTF materials.
Instead, let's turn the shader flag into an enable/disable boolean,
and add a scalar material property named "alphacutoff" that gets
read by the glTF importer.
When the alphaCutoff flag is enabled, the pixel's alpha value is
compared against the material's cutoff to decide whether the pixel
should be discarded.
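
As a minimal sketch (the function and parameter names are hypothetical,
not lovr's actual symbols), the per-pixel test the flag enables boils
down to:

    #include <stdbool.h>

    /* Hypothetical sketch: with alphaCutoff enabled, a pixel is discarded
     * when its alpha falls below the material's "alphacutoff" scalar. */
    static bool shouldDiscardPixel(bool alphaCutoffEnabled, float cutoff, float alpha) {
      return alphaCutoffEnabled && alpha < cutoff;
    }
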
- Compute feature requires compute shaders, image load/store, and SSBOs.
- GLSL 330 is always used instead of switching versions based on the compute shader extension.
- Explicitly enable compute shaders, image load/store, and SSBO extensions when needed.
This allows implementations that don't support GLSL 430 to run compute
shaders, and keeps the minimum supported GL version more consistently at
GL 3.3.
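
A sketch of what the resulting shader preamble could look like; the
extension names are the real ARB extensions, but how lovr splices this
prefix into its shaders is an assumption:

    /* Keep #version pinned at 330 and opt into the compute-related
     * features explicitly instead of jumping to GLSL 430. */
    static const char* computePrefix =
      "#version 330\n"
      "#extension GL_ARB_compute_shader : require\n"
      "#extension GL_ARB_shader_image_load_store : require\n"
      "#extension GL_ARB_shader_storage_buffer_object : require\n";
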
Headset drivers are allowed to override the vsync setting if vsync
messes up their frame timing. The vsync property is effectively a
global piece of state in core/os and doesn't change across restarts
because the window is persistent. This can mean that if you switch
from a headset driver that wants vsync off (anything except desktop)
to a headset driver that doesn't care what vsync is set to (desktop),
you could end up with a vsync setting that doesn't match t.window.vsync.
I think this is a symptom of poor design somewhere, and the best solution
to this problem is "to just not have it". Similar issues exist for, e.g.,
the window size (but that one is less weird because at least you were
the one who changed it). For now we are just going to ensure that
lovr.graphics.createWindow always modifies the vsync property.
Untested, may need to adjust this fix later.
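
A minimal sketch of the fix, assuming GLFW (which lovr uses); the
function name is illustrative, not lovr's actual symbol:

    #include <GLFW/glfw3.h>

    static GLFWwindow* window = NULL;

    /* Re-apply the swap interval on every createWindow call, so a headset
     * driver's earlier override can't survive while the window persists.
     * Assumes glfwInit() has already run. */
    static void createWindow(int width, int height, int vsync) {
      if (!window) {
        window = glfwCreateWindow(width, height, "LOVR", NULL, NULL);
        glfwMakeContextCurrent(window);
      }
      glfwSwapInterval(vsync); // runs unconditionally, matching t.window.vsync
    }
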
lovrGraphicsMapBuffer had the potential to cause a flush. Flushing
unmaps buffers. This meant that during any of the calls to map while
creating a Batch, it was possible to cause a flush and unmap other
buffers that expected to be mapped. This caused writes to unmapped
pointers and subsequent skipping of calls to glFlushMappedBufferRange.
The fix is to figure out if we need to flush upfront and get it out
of the way before mapping any buffers.
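
In sketch form, with hypothetical types and a stubbed flush standing in
for the real one:

    #include <stdbool.h>
    #include <stddef.h>

    typedef struct { size_t cursor, capacity; } Pool;

    /* Stand-in for the real flush, which unmaps every mapped buffer. */
    static void flushAndUnmapAll(void) {}

    /* Before a Batch maps anything, check every pool it will touch and do
     * the flush up front, so no map call later in the sequence can unmap a
     * pointer that is already being written through. */
    static void prepareBatch(Pool* pools, const size_t* counts, size_t n) {
      bool needsFlush = false;
      for (size_t i = 0; i < n; i++) {
        needsFlush |= pools[i].cursor + counts[i] > pools[i].capacity;
      }
      if (needsFlush) {
        flushAndUnmapAll(); // safe: nothing is mapped yet
        for (size_t i = 0; i < n; i++) pools[i].cursor = 0;
      }
    }
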
Some hardware supports ARB_compute_shader but not GL 4.3, causing
shader compilation failures because we currently switch to GLSL 430
whenever compute shaders are detected.
Instead, detect GL 4.3 directly rather than looking for the compute
shader extension. This means compute shaders will sometimes be
unavailable even when the hardware supports them.
It would be possible to improve this by modifying the way shaders
are compiled. Maybe the highest supported GLSL version should be used,
but this makes shader authoring somewhat more difficult.
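
A sketch of the detection described above, using the standard GL 3.0
version queries (the constants are defined inline in case the bundled
GL header predates them):

    #include <stdbool.h>
    #include <GLFW/glfw3.h> // pulls in the system GL declarations

    #ifndef GL_MAJOR_VERSION
    #define GL_MAJOR_VERSION 0x821B
    #define GL_MINOR_VERSION 0x821C
    #endif

    /* Gate compute support on the context version rather than the
     * extension string, since GLSL 430 is only guaranteed on 4.3. */
    static bool hasComputeShaders(void) {
      GLint major = 0, minor = 0;
      glGetIntegerv(GL_MAJOR_VERSION, &major);
      glGetIntegerv(GL_MINOR_VERSION, &minor);
      return major > 4 || (major == 4 && minor >= 3);
    }
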
We never try to discard a mapped buffer anyway, and the unmapping code
in discard doesn't flush the mapped contents, so it's better for callers
to unmap the buffer themselves before calling discard.
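
The expected call order, as a hedged sketch with hypothetical names:

    typedef struct Buffer Buffer;
    void unmapBuffer(Buffer* buffer);   /* flushes written ranges, then unmaps */
    void discardBuffer(Buffer* buffer); /* orphans the storage; never flushes */

    static void recycle(Buffer* buffer) {
      unmapBuffer(buffer);   // callers flush/unmap themselves first
      discardBuffer(buffer); // discard no longer unmaps implicitly
    }
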
It appears that GL_MAP_UNSYNCHRONIZED_BIT interferes with
GL_MAP_INVALIDATE_BUFFER_BIT's ability to discard buffer
contents. Removing the unsynchronized bit fixes visual
glitches on Intel HD GPUs.
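
A sketch of the mapping call after the change; it assumes a GL 3+ loader
is already set up (as in lovr's build), and the exact combination of the
other bits is an assumption:

    #include <glad/glad.h> // or whichever GL loader the build uses

    static void* mapForWrite(GLuint buffer, GLsizeiptr size) {
      glBindBuffer(GL_ARRAY_BUFFER, buffer);
      GLbitfield access = GL_MAP_WRITE_BIT
                        | GL_MAP_INVALIDATE_BUFFER_BIT // orphan the old contents
                        | GL_MAP_FLUSH_EXPLICIT_BIT;   // glFlushMappedBufferRange later
      // Note: no GL_MAP_UNSYNCHRONIZED_BIT; it appeared to defeat the
      // invalidation on Intel HD GPUs.
      return glMapBufferRange(GL_ARRAY_BUFFER, 0, size, access);
    }
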