Frequently Asked Questions (FAQs)

GPT Assistant Early Access Participant

 

Responsible use of AI

At Responsive, we understand and value the importance of preserving your data privacy and security. As we integrate GPT and other AI-related technology into our product, we are fully committed to ensuring that your data remains protected and confidential, and that its usage aligns with responsible AI principles.

Uncompromising data security

To guarantee the safety of your data, we have implemented cutting-edge security measures. Our robust encryption methods safeguard your data both at rest and in transit, preventing unauthorized access and data breaches. With secure data storage and backup systems, you can trust that your information is well protected. Additionally, we conduct regular security audits and maintain compliance certifications, giving you peace of mind.

Privacy by design

We have always designed our products with privacy as a core principle. Your data will never become visible in the public domain nor be used by OpenAI to train their model. Our data anonymization and aggregation techniques ensure that your information remains confidential while still providing valuable insights.

Commitment to responsible use

We firmly believe in harnessing the power of AI ethically and responsibly. Our integration of GPT technology is intended to be assistive, enhancing user experiences and streamlining workflows. Our goal is to empower our users by providing them with innovative tools that can help them achieve more with less effort.

Empowering human potential

We recognize the potential of AI, including generative-AI technology like GPT, as a valuable ally in improving human capabilities. By incorporating GPT into our product, we aim to augment the skills and expertise of our users, allowing them to focus on more strategic and creative tasks while the AI assists with repetitive or time-consuming work.

Transparency and accountability

At Responsive, we are committed to being transparent about our use of AI and GPT technology. We continuously strive to provide our customers with clear and accurate information about how our product integrates AI, the benefits it brings, and any potential limitations. Moreover, we hold ourselves accountable for ensuring that the AI we deploy aligns with ethical standards and supports the best interests of our users.

Your success is our priority

Our use of AI in our product is driven by the desire to help our customers succeed while keeping their data secure and private. We are dedicated to the responsible use of AI and committed to ensuring that our product empowers users, enhances their capabilities, and ultimately contributes to their success. With Responsive, you can trust that your data is safe, that your privacy is respected, and that the technology we provide is designed to support and elevate your work, not replace it.

We will continue to listen to your feedback, prioritize your needs, and adapt our product to ensure that it remains a valuable and secure tool for your growth and success.

Frequently Asked Questions (FAQs)

  • Is GPT assistant native to Responsive or an external service?

    Currently, this is an external service. We are using OpenAI’s GPT API for our generative-AI and GPT assistant features.
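
    As a rough illustration of what this external integration looks like, the sketch below shows a server-side call to OpenAI’s API using the openai Python package (v1.x). The model name and prompts are placeholders, not Responsive’s actual configuration.

        from openai import OpenAI

        # The API key stays on Responsive's servers; end users never talk to OpenAI directly.
        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        response = client.chat.completions.create(
            model="gpt-4",  # placeholder; the actual model choice is an internal detail
            messages=[
                {"role": "system", "content": "You are a proposal-writing assistant."},
                {"role": "user", "content": "Draft an answer to this RFP question: ..."},
            ],
        )

        print(response.choices[0].message.content)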

  • Is Responsive adding ChatGPT to Responsive’s response management platform?

    No. Responsive is bringing generative-AI capabilities to its response management platform via OpenAI’s GPT technology. ChatGPT is a chatbot application built by OpenAI using GPT technology.

  • How is Responsive’s GPT assistant different from ChatGPT?

    ChatGPT is a chatbot application built by OpenAI, and is made available to the general public by OpenAI.

    Responsive’s GPT assistant is a writing-assistance tool developed exclusively by Responsive and integrated into Responsive’s response management platform. It uses the same underlying technologies and large language model as ChatGPT, but it is distinct from ChatGPT.

    Data transmitted via GPT assistant WILL NOT be used for training public-facing language models by OpenAI.

  • Will customers’ data be used to train the language models?

    No. Customer data will not be used to train the machine-learning models. Responsive has opted out of OpenAI using any data sent from Responsive to OpenAI for training.

  • What level of exposure and opportunity is presented to Responsive via the ChatGPT interface? For example, the input of potentially commercially sensitive information into an open application and its subsequent incorporation into the open model?

    None. ChatGPT is a specific chatbot application built by OpenAI. Responsive is not integrating with ChatGPT.

    We use OpenAI’s private GPT API to build the GPT assistant features. The interactions between Responsive and OpenAI are private, and we have opted out of any data transmitted from Responsive being used to further train the GPT models. Consequently, our customers’ commercially sensitive data is never exposed to the open model or to the public.

  • Is there a possibility of content being exposed to other customers, or on the internet, via texts generated by this technology?

    No. Customer data will never be used to improve the machine-learning models; therefore, it can never surface in texts that the technology generates for other customers or on the internet.

  • Are there any privacy concerns with this technology?

    No. Responsive does not pass any information about our customers or users, such as names and email addresses, to the machine-learning model.
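
    As a purely illustrative sketch of this kind of safeguard, the snippet below redacts e-mail addresses and known user names from a prompt before it leaves the platform. The redact_pii helper and the patterns shown are hypothetical, not Responsive’s actual implementation.

        import re

        EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

        def redact_pii(text: str, known_names: list[str]) -> str:
            """Strip e-mail addresses and known user names before text is sent to the model."""
            text = EMAIL_RE.sub("[EMAIL REDACTED]", text)
            for name in known_names:
                text = text.replace(name, "[NAME REDACTED]")
            return text

        prompt = redact_pii("Contact Jane Doe at jane.doe@example.com for details.",
                            known_names=["Jane Doe"])
        # -> "Contact [NAME REDACTED] at [EMAIL REDACTED] for details."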

  • When fine-tuning GPT models, would customer data be used to train the whole GPT model?

    No. When we implement fine-tuning in the future, we will use customer data to fine-tune a model that is specific to Responsive. These fine-tuned models will be owned and managed by Responsive; they will not feed back into the public-facing GPT models, nor be used by other customers.
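
    For illustration only, the sketch below shows how a customer-specific fine-tune could be created with OpenAI’s fine-tuning API. The file name, base model, and suffix are placeholders and do not describe Responsive’s eventual implementation.

        from openai import OpenAI

        client = OpenAI()

        # Upload a training file containing only this customer's (consented) examples.
        training_file = client.files.create(
            file=open("customer_examples.jsonl", "rb"),
            purpose="fine-tune",
        )

        # The resulting model is private to the account that created it; it is not
        # merged back into OpenAI's public-facing models.
        job = client.fine_tuning.jobs.create(
            training_file=training_file.id,
            model="gpt-3.5-turbo",           # placeholder base model
            suffix="responsive-customer-x",  # placeholder identifier
        )
        print(job.id)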

  • Could the GPT technology generate misinformation?

    Possibly. As with most machine-learning models, GPT can produce inaccurate or misleading outputs. Users must carefully review the content and make appropriate edits.

    Human in the loop (HITL): When using this technology, our customers agree to keep users in the loop (human-in-the-loop). Users should be aware of the system’s limitations, have access to any information needed to verify the outputs, and review the outputs before using them.
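
    A minimal sketch of such a human-in-the-loop gate is shown below. The review_and_approve function and the workflow around it are hypothetical; they only illustrate the idea that a person reviews, edits, and signs off on every AI-generated draft before it is used.

        def review_and_approve(ai_draft: str) -> str | None:
            """Show the AI draft to a reviewer; return the edited text only if approved."""
            print("AI-generated draft:\n" + ai_draft)
            edited = input("Edit the draft (or press Enter to keep it): ") or ai_draft
            approved = input("Approve this answer? [y/N]: ").strip().lower() == "y"
            return edited if approved else None

        final_answer = review_and_approve("Our platform supports SSO via SAML 2.0 ...")
        if final_answer is None:
            print("Draft rejected; nothing is saved or sent.")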