Buffering all the data as line vertices will definitely be slower... I once tried an alternative approach that renders line drawings with performance similar to the texture version.
The idea is to model the line parametrically in the fragment shader, turning line rendering upside down: for each pixel, ask whether it lies inside or outside the waveform. If it is inside, output the colour; otherwise output transparent.
To check this, the data sampled from the texture is enough: test whether the pixel's height (texCoord.y) falls within some distance of the value stored in the texture at that horizontal position (texCoord.x). See below (also attached):
#version 400

uniform float thickness = 0.01; // half-width of the line, in texture coordinates
uniform sampler2D tex;          // waveform values stored in the red channel

in vec2 texCoord;
out vec4 fragColor;

void main()
{
    // Inside the waveform if this pixel's height is within `thickness`
    // of the value sampled at the same horizontal position.
    vec4 texel = texture(tex, texCoord);
    float value = abs(texCoord.y - texel.r) <= thickness ? 1.0 : 0.0;
    // White inside the line, fully transparent outside.
    fragColor = vec4(vec3(value), value);
}
I find it quite satisfying that the line version takes about the same shader complexity as the texture version :)
There are a few caveats to solve, though. The samples are not connected: if there is not enough horizontal sampling, the waveform will start to "tear" and reveal that it is made of discrete points rather than a continuous line. This needs some interpolation strategy to fill in the gaps.
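One possible gap-filling strategy (a sketch I'm adding here, not from the original experiment): instead of testing against a single sample, test against the full vertical span between the two texels bracketing the pixel, so steep jumps between adjacent samples still read as a connected line.

```glsl
#version 400

uniform float thickness = 0.01;
uniform sampler2D tex;

in vec2 texCoord;
out vec4 fragColor;

void main()
{
    // Width of one texel in texture coordinates.
    float texelWidth = 1.0 / float(textureSize(tex, 0).x);
    // Centre of the texel this pixel falls in, and of its right neighbour.
    float left = (floor(texCoord.x / texelWidth) + 0.5) * texelWidth;
    float a = texture(tex, vec2(left, 0.5)).r;
    float b = texture(tex, vec2(left + texelWidth, 0.5)).r;
    // Accept any height between the two neighbouring samples, padded by
    // the line thickness, so the band covers the jump between them.
    float lo = min(a, b) - thickness;
    float hi = max(a, b) + thickness;
    float value = (texCoord.y >= lo && texCoord.y <= hi) ? 1.0 : 0.0;
    fragColor = vec4(vec3(value), value);
}
```

This effectively draws a vertical segment between consecutive samples, similar to how oscilloscope-style renderers connect points; at the rightmost texel the neighbour sample relies on clamp-to-edge wrapping.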
Rendering multiple channels also takes some thought. One approach I tried at the time was to use texCoord.y to decide which channel's row to sample from the texture; you need nearest-neighbour filtering (GL_NEAREST) for this to work properly, so adjacent rows don't blend into each other.
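A sketch of that multi-channel idea, assuming the texture stores one channel per row (the `channels` uniform is my addition for illustration):

```glsl
#version 400

uniform float thickness = 0.01;
uniform int channels = 2;   // assumed uniform: number of stacked channels
uniform sampler2D tex;      // one channel per row; filter with GL_NEAREST

in vec2 texCoord;
out vec4 fragColor;

void main()
{
    // Split the viewport into horizontal bands, one per channel.
    float band   = texCoord.y * float(channels);
    int   chan   = int(floor(band)); // which channel this pixel belongs to
    float localY = fract(band);      // pixel height within its band
    // Sample the row holding this channel's data; nearest filtering keeps
    // neighbouring rows from bleeding together.
    float rowV  = (float(chan) + 0.5) / float(channels);
    float value = texture(tex, vec2(texCoord.x, rowV)).r;
    float inside = abs(localY - value) <= thickness ? 1.0 : 0.0;
    fragColor = vec4(vec3(inside), inside);
}
```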
I never took this route to its logical conclusion, but given how much work I've seen done on parametric modelling of curves in shaders, it looks feasible and could be really performant.