I use Tabnine completion in JetBrains CLion when I write Rust code. I’ve found it to be quite useful. I haven’t tried GitHub Copilot yet, so I don’t know how it compares to Tabnine.
I mean, that's literally what happened. At least initially, Tabnine was based on the GPT-2 model trained on code. Then GitHub launched Copilot using the OpenAI Codex model, which is based on GPT-3. I guess this is why several people have commented on the marked improvement they saw when adopting GitHub Copilot.
I have no idea how Tabnine builds their models today, or how they perform compared to Copilot. I guess one advantage could be the lower suggestion latency they claim comes from training smaller, more specialized models. But the way Copilot works for me is that it "thinks" for about as long as I need to think, and then it suggests a good chunk of code. If my thinking and Copilot's suggestion match up, I can save myself a good bit of typing.
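To give a rough idea of what that looks like (a made-up Rust example, not an actual Copilot suggestion): you type a doc comment and a signature, and the tool proposes the whole body in one go:

    /// Returns the sum of all even numbers in the slice.
    fn sum_of_evens(nums: &[i32]) -> i32 {
        // A completion tool might suggest this entire body as one chunk.
        nums.iter().filter(|&&n| n % 2 == 0).sum()
    }

    fn main() {
        assert_eq!(sum_of_evens(&[1, 2, 3, 4]), 6);
    }

When the suggested body is what I was about to type anyway, accepting it is one keypress instead of a couple of lines of typing.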