Fl_Gl_Window FL_DEPTH

danielc...@gmail.com

Dec 17, 2025, 8:51:23 PM
to fltk.general
Hi, how do I make Fl_Gl_Window use a 32-bit FL_DEPTH buffer?

The default is 24-bit; is there a way to set it to 32-bit without using an FBO?

Thanks


Matthias Melcher

Dec 18, 2025, 2:17:16 PM
to fltk.general

IIRC, mode(FL_RGB8|FL_ALPHA) will generate a 32-bit buffer. It's been a while since I worked with this.
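
If you want to verify what the driver actually granted, you can query the color bit planes once the context is current. A minimal sketch (GL_RED_BITS and friends are legacy queries, so this assumes a compatibility profile, i.e. no FL_OPENGL3):

// Inside Fl_Gl_Window::draw(), once the context exists:
GLint r = 0, g = 0, b = 0, a = 0;
glGetIntegerv(GL_RED_BITS,   &r);
glGetIntegerv(GL_GREEN_BITS, &g);
glGetIntegerv(GL_BLUE_BITS,  &b);
glGetIntegerv(GL_ALPHA_BITS, &a);
printf("color buffer: R%d G%d B%d A%d (%d bits total)\n", r, g, b, a, r + g + b + a);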

danielc...@gmail.com

Dec 18, 2025, 2:45:47 PM
to fltk.general
I have
mode(FL_OPENGL3 | FL_RGB8 | FL_ALPHA | FL_DOUBLE | FL_DEPTH | FL_ACCUM | FL_STENCIL | FL_MULTISAMPLE);

GLint depthBits = 0;
glGetIntegerv(GL_DEPTH_BITS, &depthBits);
std::cout << "Window depth buffer precision: " << depthBits << " bits" << std::endl;

says: Window depth buffer precision: 24 bits

I want 32 bits for the depth buffer to increase precision, because I'm seeing z-fighting and ghosting.

Using FLTK 1.5 and OpenCASCADE 7.9.3.

Thanks

Matthias Melcher

10:26 AM
to fltk.general
Oh, I am sorry, I misread. You wanted a 32-bit depth buffer, not a 32-bit color buffer.

I just added DEPTH32 as a mode flag for all platforms. Checks are pending right now, but the option should be available through GitHub in 20 minutes from now.
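
Once that lands, requesting it should be as simple as combining the new flag with the other buffer bits, e.g.:

// untested sketch with the new flag:
mode(FL_RGB | FL_DOUBLE | FL_DEPTH32); // request a 32-bit depth buffer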

danielc...@gmail.com

4:39 PM
to fltk.general
I compiled and tested with this code, but it says "Insufficient GL support" and shows a blank window. This is on Linux with the latest FLTK dev sources.

// g++ flgltest.cpp -o flgltest `fltk-config --cxxflags --ldflags --use-gl`

#include <FL/Fl.H>
#include <FL/Fl_Window.H>
#include <FL/Fl_Gl_Window.H>
#ifdef __APPLE__
#include <OpenGL/gl.h>
#else
#include <GL/gl.h>
#include <GL/glext.h>
#endif
#include <cstdio>

class SimpleGLWindow : public Fl_Gl_Window {
public:
  SimpleGLWindow(int X, int Y, int W, int H, const char* L = 0)
    : Fl_Gl_Window(X, Y, W, H, L) {
    mode(FL_OPENGL3 | FL_DEPTH32);          // says Insufficient GL support
    // mode(FL_RGB | FL_DOUBLE | FL_DEPTH); // works, prints: Window depth buffer precision: 24 bits
  }

private:
  void draw() override {
    if (!valid()) {
      make_current(); // ensure context is current

#ifndef __APPLE__
      GLint depthBits = 0;
      glGetIntegerv(GL_DEPTH_BITS, &depthBits);
      printf("Window depth buffer precision: %d bits\n", depthBits);
#endif
      glEnable(GL_DEPTH_TEST);
      glViewport(0, 0, w(), h());
    }

    glClearColor(0.08f, 0.10f, 0.14f, 1.f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glBegin(GL_TRIANGLES);
    glColor3f(0.9f, 0.2f, 0.2f); glVertex2f(-0.6f, -0.6f);
    glColor3f(0.2f, 0.9f, 0.2f); glVertex2f( 0.6f, -0.6f);
    glColor3f(0.2f, 0.2f, 0.9f); glVertex2f( 0.0f,  0.6f);
    glEnd();
  }

  int handle(int event) override {
    switch (event) {
      case FL_FOCUS:
      case FL_UNFOCUS:
      case FL_ENTER:
      case FL_LEAVE:
        return 1;
      default:
        return Fl_Gl_Window::handle(event);
    }
  }
};

int main(int argc, char** argv) {
  Fl::visual(FL_DOUBLE | FL_RGB);

  Fl_Window window(640, 480, "FLTK OpenGL Demo");
  SimpleGLWindow glview(10, 10, window.w() - 20, window.h() - 20);
  window.resizable(glview);
  window.end();

  window.show(argc, argv);
  return Fl::run();
}

Thanks

Matthias Melcher

5:10 PM
to fltk.general
I don't know. Maybe your card or driver does not support 32-bit depth buffers? Is this Wayland or X11?

danielc...@gmail.com

5:22 PM
to fltk.general
It's X11. I tested and researched this; most drivers don't support a 32-bit depth buffer directly, only through an FBO. It would be nice if FLTK supported this by creating and managing the FBO itself, but I think that's too much to ask (a rough sketch of the FBO route follows after the sample below).
Anyway, here is a sample that uses a 32-bit depth buffer if available and falls back to 24-bit if not:

// g++ flgltest.cpp -o flgltest `fltk-config --cxxflags --ldflags --use-gl`

#include <FL/Fl.H>
#include <FL/Fl_Window.H>
#include <FL/Fl_Gl_Window.H>
#ifdef __APPLE__
#include <OpenGL/gl.h>
#else
#include <GL/gl.h>
#include <GL/glext.h>
#endif
#include <cstdio>

class SimpleGLWindow : public Fl_Gl_Window {
public:
  SimpleGLWindow(int X, int Y, int W, int H, const char* L = 0)
    : Fl_Gl_Window(X, Y, W, H, L) {
    // Try to request a 32-bit depth buffer
    mode(FL_RGB | FL_DOUBLE | FL_DEPTH32);
    if (!can_do(mode())) {
      fprintf(stderr, "FL_DEPTH32 not supported, falling back to FL_DEPTH\n");
      mode(FL_RGB | FL_DOUBLE | FL_DEPTH);
    }
  }

private:
  void draw() override {
    if (!valid()) {
      make_current();

#ifndef __APPLE__
      GLint depthBits = 0;
      glGetIntegerv(GL_DEPTH_BITS, &depthBits);
      printf("Window depth buffer precision: %d bits\n", depthBits);
#endif
      glEnable(GL_DEPTH_TEST);
      glViewport(0, 0, w(), h());
    }

    glClearColor(0.08f, 0.10f, 0.14f, 1.f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glBegin(GL_TRIANGLES);
    glColor3f(0.9f, 0.2f, 0.2f); glVertex2f(-0.6f, -0.6f);
    glColor3f(0.2f, 0.9f, 0.2f); glVertex2f( 0.6f, -0.6f);
    glColor3f(0.2f, 0.2f, 0.9f); glVertex2f( 0.0f,  0.6f);
    glEnd();
  }
};

int main(int argc, char** argv) {
  Fl::visual(FL_DOUBLE | FL_RGB);

  Fl_Window window(640, 480, "FLTK OpenGL Depth Precision Demo");
  SimpleGLWindow glview(10, 10, window.w() - 20, window.h() - 20);
  window.resizable(glview);
  window.end();

  window.show(argc, argv);
  return Fl::run();
}
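
For reference, the FBO route mentioned above would look roughly like this in plain OpenGL 3.0. This is only a sketch, not FLTK API; fbWidth/fbHeight stand in for the drawable size (e.g. pixel_w()/pixel_h()), and on Linux you may need to compile with -DGL_GLEXT_PROTOTYPES (or use a loader like GLEW) to get the prototypes:

// Create once, e.g. in draw() when !valid():
GLuint fbo = 0, colorRb = 0, depthRb = 0;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);

glGenRenderbuffers(1, &colorRb);                 // color attachment
glBindRenderbuffer(GL_RENDERBUFFER, colorRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, fbWidth, fbHeight);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          GL_RENDERBUFFER, colorRb);

glGenRenderbuffers(1, &depthRb);                 // 32-bit float depth attachment
glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT32F, fbWidth, fbHeight);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER, depthRb);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
  fprintf(stderr, "FBO incomplete\n");

// Per frame: render the scene into the FBO, then blit the color result
// to the window-system framebuffer (id 0).
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
// ... draw scene with 32-bit depth precision ...
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, fbWidth, fbHeight, 0, 0, fbWidth, fbHeight,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);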
Thanks

Matthias Melcher

5:32 PM
to fltk.general
Ok, reading up on X11/GLX, it seems that many drivers indeed do not support 32-bit depth buffers, even though the card could. I used to do a lot of OpenGL programming, but at some point I missed the boat on OpenGL 3 since I no longer needed it on the job. I can't help with implementing FBOs for FLTK, unfortunately. Should you get a working solution and feel like adding the code to FLTK, I'll do my best to integrate it.