I started experimenting with this theory late last weekend and realized that LLMs were deployed as customer support bots on dozens (if not hundreds?) of websites, and every single one was vulnerable to the same bug. So I gathered them all up and packaged them in a little Python library. Then I used that library to add all these LLMs to a Matrix room.
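Roughly, the bridge looks like this. This is only a minimal sketch of the idea, using matrix-nio for the Matrix side; the `SupportBot` class, its `ask()` method, and the endpoint URL are stand-ins I've made up for illustration, not the actual library's API:

```python
import asyncio

import requests
from nio import AsyncClient, MatrixRoom, RoomMessageText


class SupportBot:
    """Stand-in for one scraped customer-support LLM endpoint.

    The real library wraps each site's chat API; the names here
    are illustrative, not the actual interface.
    """

    def __init__(self, name: str, endpoint: str):
        self.name = name
        self.endpoint = endpoint

    def ask(self, prompt: str) -> str:
        # Underneath, each site's bot is just an HTTP chat endpoint.
        resp = requests.post(self.endpoint, json={"message": prompt}, timeout=30)
        resp.raise_for_status()
        return resp.json()["reply"]


async def main() -> None:
    bots = [
        # Hypothetical endpoint, for illustration only.
        SupportBot("tom", "https://example.com/support/chat"),
    ]
    client = AsyncClient("https://matrix.example.org", "@bridge:example.org")
    await client.login("hunter2")  # placeholder credentials
    room_id = "!abc123:example.org"

    async def on_message(room: MatrixRoom, event: RoomMessageText) -> None:
        # Relay each human message to every support bot, post their replies.
        if event.sender == client.user_id:
            return  # ignore our own messages
        for bot in bots:
            # Run the blocking HTTP call off the event loop.
            reply = await asyncio.to_thread(bot.ask, event.body)
            await client.room_send(
                room_id,
                message_type="m.room.message",
                content={"msgtype": "m.text", "body": f"{bot.name}: {reply}"},
            )

    client.add_event_callback(on_message, RoomMessageText)
    await client.sync_forever(timeout=30000)


asyncio.run(main())
```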
(The bot is named 'Tom'. I've only just realized how confusing this is in this context, but I assure you I did not name it, and you cannot blame me for this.)