It stayed somewhat experimental because, although it was neat, I still found it better to write actual code to insert into the database rather than inline SQL in pipelines. That was partly because my case was a bit more complex, though; there could be uses where that is not true. Here is an example (albeit a somewhat silly one) of how it can be used:
create table person (name varchar(50), age integer);
insert into person values ('${file(input.txt).text.trim()}',50);
insert into person values ('andrew',42);
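In case the interpolation in the first insert isn't obvious: bpipe reads input.txt, trims the whitespace, and splices the result straight into the SQL text before it runs. A quick sketch of that substitution step in Python (the file contents here, "mary", are just made up for the demo):

```python
# Sketch of what the ${file(input.txt).text.trim()} interpolation produces:
# read the file, strip surrounding whitespace, splice into the statement.
from pathlib import Path

Path("input.txt").write_text("mary\n")  # hypothetical sample input

name = Path("input.txt").read_text().strip()
sql = f"insert into person values ('{name}',50);"
print(sql)  # → insert into person values ('mary',50);
```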
The alternative is simply adding direct DB calls in your bpipe script, which is easily possible:
@Grab(group='org.xerial', module='sqlite-jdbc', version='3.23.1')

db = groovy.sql.Sql.newInstance('jdbc:sqlite:test.db')
db.execute('create table person (name varchar(50), age integer)')

run { hello }  // 'hello' being whatever pipeline stage you define
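If it helps to see that direct-call pattern without the Groovy/JDBC setup, here is the same idea using Python's standard sqlite3 module. This is purely an illustration (an in-memory database and hard-coded values), not anything bpipe provides:

```python
# The same "call the database directly from code" pattern, sketched with
# Python's stdlib sqlite3 instead of Groovy + sqlite-jdbc.
import sqlite3

conn = sqlite3.connect(":memory:")  # in-memory stand-in for test.db
conn.execute("create table person (name varchar(50), age integer)")
# Parameter placeholders avoid the quoting issues of splicing text into SQL.
conn.execute("insert into person values (?, ?)", ("andrew", 42))
rows = conn.execute("select name, age from person").fetchall()
print(rows)  # → [('andrew', 42)]
```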
Of course, this second version creates the database directly in the pipeline script, so that work will not be submitted as a job to a cluster or get the benefit of any of bpipe's other features (command monitoring, capture of output, etc). If you find it useful and give it some testing, it can certainly become an "official" feature.
Cheers,
Simon