To my mind, it achieved its original scientific objective (reproducing the A0 result); made a couple of innovations to the basic architecture (e.g. MLH); and then tried to keep up with Stockfish through optimization and tuning. But in the fight of "best self-learning engine" vs. "best engine, unlimited", the latter wins. The fields of RL and deep learning have continued to innovate, but LC0 has stuck with basically the same architecture. Stockfish subsumed the essentials; LC0 didn't innovate (or, if they did, they didn't communicate it); so people drifted away.
I think LC0 achieved success; spawned multiple projects that took advantage of that success; but hasn't found a second act.