What your parser parses is a sequence of tokens. If you don't pass the parser all the tokens that the grammar expects, then the parse can never succeed.
For instance, the problem with this lexer:
(lexer
 ["select" lexeme]
 [whitespace (token lexeme #:skip? #t)]
 [any-char (next-token)])
is that it only ever produces one kind of token, "select": the whitespace rule skips whitespace, and the any-char rule silently discards every other character. Since your parser uses more than just the token "select":
#lang brag
select : /"select" fields /"from" source joins* filters*
fields : @field (/"," @field)*
field : WORD
source : WORD
joins : join*
join : "join" source "on" "(" condition ")"
filters : "where" condition ("and" | "or" condition)*
condition : field | INTEGER "=" field | INTEGER
The parse can never succeed.
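If you want to see this concretely, brag/support provides apply-lexer, which runs a lexer over a string and returns the list of tokens it produces. (Below I've replaced your recursive (next-token) call with a skipped token so the lexer stands alone; the effect is the same, because the parser ignores anything marked #:skip?.)

(require brag/support)

(define select-only-lexer
  (lexer
   ["select" lexeme]
   [whitespace (token lexeme #:skip? #t)]
   ;; stands in for your (next-token) call
   [any-char (token lexeme #:skip? #t)]))

(apply-lexer select-only-lexer "select a, b from c")
;; everything except "select" comes back marked #:skip?,
;; so the parser effectively sees just one token: "select"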
Likewise, your revised lexer:
(lexer
 [whitespace (token lexeme #:skip? #t)]
 ["select" lexeme]
 [(:seq alphabetic) (token 'WORD lexeme)])
will only emit two kinds of tokens: "select", and a WORD token containing a single letter as its lexeme. (Do you see why?) Also not what you want.
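(Hint: :seq merely concatenates its sub-patterns; it doesn't repeat them. So (:seq alphabetic) matches exactly one alphabetic character. You can confirm this with apply-lexer:

(apply-lexer
 (lexer [(:seq alphabetic) (token 'WORD lexeme)])
 "foo")
;; a list of three separate WORD tokens: "f", "o", "o"

)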
I can't write your whole tokenizer. But as a start, you probably want to match each of your reserved keywords as a whole token, e.g. —
[(:or "select" "from" "join" "on" "where" "and") lexeme]
If you want other sequences of characters to be captured as WORD tokens, your pattern needs a quantifier:
[(:+ alphabetic) (token 'WORD lexeme)]
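Putting the pieces together, a rough sketch of a complete tokenizer might look like the following. (The INTEGER, punctuation, and "or" rules are my guesses based on your grammar; adjust to taste.)

(require brag/support)

(define sql-lexer
  (lexer
   ;; keywords become string tokens, which match the quoted
   ;; literals in your grammar
   [(:or "select" "from" "join" "on" "where" "and" "or") lexeme]
   ;; punctuation your grammar uses literally
   [(:or "," "(" ")" "=") lexeme]
   ;; guessed from your INTEGER terminal
   [(:+ numeric) (token 'INTEGER (string->number lexeme))]
   [(:+ alphabetic) (token 'WORD lexeme)]
   [whitespace (token lexeme #:skip? #t)]))

(define (make-tokenizer port)
  (define (next-token)
    (sql-lexer port))
  next-token)

Then, assuming your grammar lives in a module, say sql-grammar.rkt, you can require it and test the whole pipeline with (parse-to-datum (make-tokenizer (open-input-string "select a, b from c"))). Note the rule order: the keyword rule precedes (:+ alphabetic), so an exact "select" lexes as the keyword, while a longer run like "selector" still lexes as a WORD. (The lexer prefers the longest match, with ties broken by the earlier rule.)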