Not that I know the details, but I have my doubts that BloombergGPT was even worth it. I think "maybe look at" is a little too gentle – if you think you need your own model, you don't.
Prompt engineering and even somewhat thoughtful pipeline engineering should take care of most of your use cases, with fine-tuning filling in any gaps. The only reason you'd train from scratch is if you're worried about the copyright/legal/ethical implications of the data LLMs were trained on – and if that's your concern, I doubt you have enough data to build a model.