Trust is one of the major hurdles to the adoption and success of any AI application.
Surveys show that, on the one hand, implementing AI solutions is the number one priority for business leaders, while on the other hand, 59% of customers do not trust companies with their data. This is a major gap that companies adopting or implementing AI solutions need to close.
Salesforce has been an early adopter of AI and has been on the AI journey since 2014. With almost a decade of AI research and leadership behind it, Salesforce addresses this gap with its Einstein GPT Trust Layer.
The Einstein GPT Trust Layer is a set of actions and processes that help ensure the generative AI models employed by Salesforce are used responsibly and ethically. One of its major features is its ability to protect customer data. The following actions happen once a user enters a prompt:
- Secure data retrieval: The Trust Layer ensures that customer data is retrieved only from Salesforce’s CRM system, and only when it is needed for a specific task. This prevents unauthorized access to customer data.
- Dynamic grounding: The Trust Layer allows users to provide specific context for their prompts without having to share the underlying data. It does this through a technique called “dynamic grounding,” which generates a prompt tailored to the specific task at hand.
- Data masking: The Trust Layer can mask sensitive data in prompts so that it is not visible to the generative AI model. This helps to protect customer privacy.
- Zero retention: The Trust Layer ensures that all prompts are deleted after they have been used, so none of the customer data is retained by the generative AI model.
- Toxicity detection: The Trust Layer can detect and remove toxic or biased content from prompts. This helps prevent the AI model from generating harmful or offensive content that could hurt the audience’s sentiments or expose the business to legal challenges.
- Audit trail: Finally, the Trust Layer collects and stores metadata about the context, user prompt, and AI-generated response in a secure location accessible only to authorized personnel, ensuring that the AI models are being used in a safe and responsible way.
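To make the flow above concrete, here is a minimal, purely illustrative sketch of a trust-layer-style prompt pipeline. This is not Salesforce’s implementation, and every function, pattern, and name here is a hypothetical stand-in (for example, real toxicity detection uses trained classifiers, not a keyword list):

```python
import re
import hashlib
from datetime import datetime, timezone

# Hypothetical sketch of a trust-layer-style pipeline.
# NOT Salesforce's implementation; all names are illustrative.

PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

TOXIC_TERMS = {"idiot", "stupid"}  # placeholder for a real toxicity classifier


def mask_pii(prompt: str) -> str:
    """Data masking: replace sensitive values before the prompt leaves the system."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt


def is_toxic(prompt: str) -> bool:
    """Toxicity detection: crude keyword check standing in for a classifier."""
    return any(term in prompt.lower() for term in TOXIC_TERMS)


def handle_prompt(user_prompt: str, audit_log: list) -> "str | None":
    masked = mask_pii(user_prompt)          # data masking
    if is_toxic(masked):
        return None                         # blocked before reaching the model
    # Audit trail: store metadata (a hash, never the raw prompt or data).
    audit_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt_hash": hashlib.sha256(masked.encode()).hexdigest(),
    })
    response = f"model response to: {masked}"  # stand-in for the LLM call
    del masked                              # zero retention: prompt discarded after use
    return response


log: list = []
print(handle_prompt("Email jane@example.com about the renewal", log))
# → model response to: Email [EMAIL] about the renewal
```

The key design point the sketch illustrates is ordering: masking happens before anything reaches the model, the audit record stores only metadata, and the prompt is discarded once the response is produced.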
To summarize this entire process, in the words of Patrick Stokes (EVP, Salesforce) about the Einstein GPT Trust Layer in his keynote speech at the World Tour: London event:
> The Trust Layer makes sure that customer data is never stored in the generative AI model. Instead, the Trust Layer only retrieves the data that is needed for a specific task, and then it deletes the data after the task is complete. This helps to protect customer privacy and ensure that customer data is not used in a way that is harmful or offensive.
The Einstein GPT Trust Layer is an important part of Salesforce’s commitment to responsible AI. It helps ensure that generative AI models are used in a way that protects customer data and respects privacy.