If he’s not communicating in an explicit and clear way, the AI can’t help you magically gain context. It will happily make up bullshit that sounds plausible, though.
A poorly designed tool will do that, yes. An effective tool would do the same thing a person could do, except much quicker, and with greater success.
An LLM could be trained on the way a specific person communicates over time, and could be designed to do a forensic breakdown of misspelt words, e.g. checking whether the stray letters sit next to the intended keys on the keyboard, or identifying words that are spelt differently but sound similar phonetically.
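To make that second part concrete, here’s a rough sketch of the kind of non-LLM heuristics I mean for the misspelling side (the keyboard map, the toy vocabulary and the crude phonetic key are illustrative placeholders, not anyone’s real spell-checker):

```python
# Illustrative sketch: score correction candidates for a misspelt word
# using (a) QWERTY adjacency of substituted letters and (b) a crude
# phonetic signature. Only substitutions are handled, to keep it short.

QWERTY_NEIGHBOURS = {
    "q": "wa", "w": "qase", "e": "wsdr", "r": "edft", "t": "rfgy",
    "y": "tghu", "u": "yhji", "i": "ujko", "o": "iklp", "p": "ol",
    "a": "qwsz", "s": "awedxz", "d": "serfcx", "f": "drtgvc",
    "g": "ftyhbv", "h": "gyujnb", "j": "huikmn", "k": "jiolm",
    "l": "kop", "z": "asx", "x": "zsdc", "c": "xdfv", "v": "cfgb",
    "b": "vghn", "n": "bhjm", "m": "njk",
}

def phonetic_signature(word: str) -> str:
    """Very rough phonetic key: keep the first letter, drop later vowels,
    collapse repeated letters (a stand-in for Soundex/Metaphone)."""
    word = word.lower()
    key = word[0]
    for ch in word[1:]:
        if ch in "aeiou" or ch == key[-1]:
            continue
        key += ch
    return key

def keyboard_substitution_score(typo: str, candidate: str) -> float:
    """Fraction of mismatched positions explained by adjacent-key slips."""
    if len(typo) != len(candidate):
        return 0.0
    mismatches = [(t, c) for t, c in zip(typo, candidate) if t != c]
    if not mismatches:
        return 1.0
    explained = sum(1 for t, c in mismatches if c in QWERTY_NEIGHBOURS.get(t, ""))
    return explained / len(mismatches)

def suggest(typo: str, vocabulary: list[str]) -> list[tuple[str, float]]:
    """Rank vocabulary words as likely corrections for a typo."""
    scored = []
    for word in vocabulary:
        score = keyboard_substitution_score(typo, word)
        if phonetic_signature(typo) == phonetic_signature(word):
            score += 0.5  # phonetically similar spellings ("their"/"there" style)
        if score > 0:
            scored.append((word, score))
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

print(suggest("hwllo", ["hello", "hollow", "howl"]))  # "w" sits next to "e"
```

A real tool would combine heuristics like these with the per-person model of how they usually write, but this is the basic shape of it.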
asking for clarification seems like a reasonable thing to do in a conversation.
A tool is not about to do that because it would feel weird and creepy for it to just take over the conversation.
The intent isn’t for the LLM to respond for you; it’s just to interpret a message and offer suggestions on what it means, or to rewrite it to be clearer (while still displaying the original).
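Something like this, as a rough sketch (the prompt wording and the `call_llm` placeholder are assumptions about one possible implementation, not any real product’s code):

```python
# Sketch of an "interpretation assistant" that never replies on the
# reader's behalf: it returns a plain-language reading of a message and a
# clarified rewrite, and the UI keeps the original visible alongside it.
# `call_llm` is a placeholder for whatever completion API you plug in.

INTERPRET_PROMPT = """You are helping the reader understand a message.
Do NOT reply to the sender. Given the message below, return:
1. What the sender most likely means, in one sentence.
2. A clearer rewrite of the message.
If the meaning is ambiguous, say so instead of guessing.

Message: {message}"""

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in a model/API of your choice here")

def interpret(message: str) -> dict:
    suggestion = call_llm(INTERPRET_PROMPT.format(message=message))
    # The original is always returned with the suggestion so the reader
    # sees exactly what was sent, not just the model's reading of it.
    return {"original": message, "suggestion": suggestion}
```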
Are there any companies doing anything similar to this? From what I’ve seen, companies avoid this stuff like the plague; their LLMs are always frozen, with no custom training. Training takes a lot of compute, but it also carries a huge risk of the LLM going off the rails and saying bad things that could get the company into trouble or generate bad publicity. There’s also the disk space per customer and the loading time of individual models.
The only hope for your use case is that the LLM has a large enough context window to look at previous examples from your chat and use those for each request, but that isn’t the same thing as training.
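Roughly what that looks like: you pack past examples from the chat into the prompt and let the model condition on them, with no weight changes anywhere. A minimal sketch (`call_llm` again stands in for whatever completion API is used):

```python
# In-context alternative to per-user training: include previous messages
# from this sender (and what they turned out to mean, where known) in the
# prompt so the model can pick up their habits without any fine-tuning.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in a model/API here")

def build_prompt(history: list[tuple[str, str]], new_message: str) -> str:
    examples = "\n\n".join(
        f"Sender wrote: {raw}\nThey meant: {meaning}" for raw, meaning in history
    )
    return (
        "Past messages from this sender and what they meant:\n\n"
        f"{examples}\n\n"
        "In the same spirit, explain what this new message means:\n"
        f"Sender wrote: {new_message}\nThey meant:"
    )

def interpret_with_history(history, new_message, max_examples=20):
    # Older examples get dropped once the prompt would outgrow the
    # context window; that window is exactly the constraint here.
    return call_llm(build_prompt(history[-max_examples:], new_message))
```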
My friend works for a startup that does exactly that - trains AIs on conversations and responses from a specific person (some business higher-ups) for purposes of “coaching” and “mentoring”. I don’t know how well it works.
it probably works pretty well when it’s tested and verified instead of unsupervised
and for a small pool of people instead of hundreds of millions of users
There are plenty of people and organisations doing stuff like this; there are plenty of examples on Hugging Face, though typically it’s to get an LLM to communicate in a specific manner (e.g. this one trained on Lovecraft’s works). People drastically overestimate the amount of compute time and resources that training and running an LLM takes; do you think Microsoft could force their AI onto every single Windows computer if it was as challenging as you imply? Also, you don’t need to start from scratch: take a model that’s already robust and developed and fine-tune it with additional training data, or, for a hack job, just merge a LoRA into the base model.
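For reference, the LoRA-merge route is only a few lines with Hugging Face’s transformers and peft libraries; a sketch, with placeholder model and adapter paths (exact arguments can vary between library versions):

```python
# Sketch of the "merge a LoRA into the base model" route using
# transformers + peft. Model and adapter paths are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE = "meta-llama/Llama-2-7b-hf"        # placeholder base model
ADAPTER = "path/to/your-lora-adapter"    # placeholder fine-tuned adapter

base_model = AutoModelForCausalLM.from_pretrained(BASE)
tokenizer = AutoTokenizer.from_pretrained(BASE)

# Load the LoRA adapter on top of the base weights...
model = PeftModel.from_pretrained(base_model, ADAPTER)
# ...then fold the low-rank updates into the base weights so you end up
# with a single standalone checkpoint.
merged = model.merge_and_unload()

merged.save_pretrained("merged-model")
tokenizer.save_pretrained("merged-model")
```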
The intent, by the way, isn’t for the LLM to respond for you; it’s just to interpret a message and offer suggestions on what it means, or to rewrite it to be clearer (while still displaying the original).
Hugging Face isn’t customer-facing, it’s developer-facing. Letting customers retrain your LLM sounds like a bad idea for a company like Meta or Microsoft; it’s too risky and could make them look bad. Retraining an LLM on Lovecraft is a totally different scale from retraining an LLM for hundreds of millions of individual customers.
It’s a cloned image, not unique per computer
Hugging Face being developer-facing is completely irrelevant considering the question you asked was whether I was aware of any companies doing anything like this.
Your concern that companies like Meta and Microsoft are too scared to let users retrain their models is also irrelevant, considering both of these companies have already released models that anyone can retrain or checkpoint-merge, i.e. Llama by Meta and Phi by Microsoft.
Microsoft’s Copilot works off a base model, yes, but it’s just an example that LLMs aren’t as CPU-intensive as they’re made out to be. Further automated fine-tuning isn’t out of the realm of possibility either, and I fully expect Microsoft to do this in the future.
they release them to developers; they don’t automatically retrain them unsupervised in their actual products and put them in front of customers, who would share screenshots of the AI’s failures on social media and give it a bad name
They release them under permissive licences so that anyone can do that.
yea someone could take the model and make their own product with their own PR and public perception
that’s very different from directly spoonfeeding it as a product to general consumers inside of WhatsApp or something
it’s like saying someone can mod Skyrim to put nude characters in it, that’s very different from Bethesda selling the game with nude characters