GENERATIVE AI USE POLICY

(ChatGPT, code generation tools and other similar AI tools)


CG encourages efficiency and the use of new technology to improve our work product. Generative AI models (e.g., ChatGPT, Scribe, GPT-4, and similar text generation tools; CodeStarter, GitHub Copilot, Codex, and similar code generation tools; and other hosted artificial intelligence tools) can help research topics, draft communications, write code, and otherwise drive positive outcomes for the Company. As with all tools, every Connor Group professional is expected to use generative AI tools responsibly, legally, and ethically.


A primary concern with generative AI tools is that information submitted to a tool enters that service’s “black box” and is continually re-used as part of the AI system’s learning model. Information or data entered into AI tools by Connor Group professionals may be retained in the AI system indefinitely, even becoming part of future results the service generates for other users, such as competitors or malicious actors. Accordingly, when using generative AI tools, CG professionals are expected to comply with the usage guidelines outlined below and all other common and reasonable practices. CG will continue to adjust these guidelines as appropriate.


Accountability for AI-Generated Output

  1. Generative AI can be a useful tool; however, you remain responsible for the accuracy and quality of the content that is generated. Always verify the output of any AI-generated materials before incorporating them into any work product, deliverables, or communications. As a reminder, all of our work products, deliverables, and communications are subject to quality reviews by the appropriate team members in accordance with the firm’s policies. Use of generative AI does not obviate the need for such quality reviews.
  2. Professionals may never use generative AI tools to create any content that is illegal, discriminatory, defamatory, offensive, inappropriate or otherwise incongruent with Connor Group best practices.
  3. Exercise caution when using AI-generated materials to make business decisions. Generative AI can provide insights and support, but it is essential to approach business-critical decisions with additional care. Always corroborate AI-generated recommendations with independent research, expert opinion, and sound judgment.
  4. Source code generated by an AI tool for systems, applications, or services must be committed to a CG-approved repository, scanned for vulnerabilities, and published through existing pipelines prior to deployment (an illustrative scan-and-gate sketch follows this list).
  5. Intellectual property rights are a concern when using generative AI tools. Do not use any copyrighted or trademarked materials in your work, and do not input any CG work product into a generative AI tool.
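To illustrate item 4, the sketch below shows one way a pre-deployment gate could run a vulnerability scan over AI-generated code before it is published. This is a minimal sketch only, not CG’s actual pipeline: it assumes a Python codebase, the open-source Bandit scanner, and a placeholder repository path; the approved repositories, scanners, and pipelines remain those defined by the existing engineering workflow.

```python
"""
Illustrative pre-deployment gate for AI-generated Python code.

Hypothetical sketch only: the repository path, the choice of scanner
(Bandit), and the failure policy are assumptions for illustration.
"""
import subprocess
import sys


def scan_generated_code(path: str) -> bool:
    """Run a recursive Bandit scan; return True only if no findings are reported."""
    # Bandit exits with a non-zero status when it reports security findings.
    result = subprocess.run(["bandit", "-r", path], capture_output=True, text=True)
    print(result.stdout)
    return result.returncode == 0


if __name__ == "__main__":
    # Placeholder path to the checked-out, CG-approved repository.
    repo_path = sys.argv[1] if len(sys.argv) > 1 else "."
    if not scan_generated_code(repo_path):
        # Block deployment until the findings are reviewed and resolved.
        sys.exit("Vulnerability scan failed; do not deploy AI-generated code.")
    print("Scan clean; code may proceed through the existing deployment pipeline.")
```

In practice, a check of this kind would typically run as a step in the existing CI/CD pipeline rather than as a standalone script.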

Public AI Use

  1. You have a legal and contractual obligation to maintain the confidentiality of client information as well as CG information. It is prohibited to input any sensitive, confidential, or proprietary information into non-approved generative AI tools, including financial or strategic information, non-public information, and personal information of a client, or any information that pertains to CG.
  2. Do not input any information about CG’s business processes or any other intellectual property of the Company – including templates, memos, implementation code, workbooks, business practices, workflows, charts, analysis, policies or strategy.
  3. Assume that there is no privacy or confidentiality for any data or information submitted to a generative AI tool. Expect the entire chat history and results to be public information. This remains true even if the generative AI tool supports a “confidentiality” setting; such settings may not function as you intend or believe. Therefore, even if you opt in to such settings, it remains prohibited to input confidential or sensitive data of the Company or any client into the generative AI tool.
  4. Generative AI tools must be used in compliance with all applicable laws and regulations, including data protection and privacy laws.
  5. Do not engage in activities that could compromise the security or integrity of the Company or harm the reputation and brand of CG with its clients and the public.
  6. Limit personal information entered into public AI engines. Personal information entered into public generative AI engines could be mined by malicious actors to create more realistic phishing and social engineering attacks.

Approved AI Platforms

Connor Group has partnered with Microsoft and OpenAI to license their respective Generative AI offerings for private company use. Connor Group's implementation creates privatized instances of these engines to maintain the privacy and security of data submitted to them. Data entered into these private-use tenants is NOT available to other AI customers, NOT available to OpenAI or Microsoft, and thus NOT used to improve their respective AI models.

Caveats:

  • Connor Group's Acceptable Use policy still applies to interactions with these tools; illegal or unethical use is subject to disciplinary and legal repercussions.
  • Accountability for AI-generated output is still required! Private-use AI tenants have the same accuracy and bias issues inherent in publicly accessible generative AI.

The following AI instances are approved for processing sensitive data:

Bing Chat Enterprise

Microsoft offers privately licensed use of its AI engine (Copilot) integrated into the Microsoft 365 environment. Bing Chat Enterprise is used as the front-end interface and creates an ephemeral, encrypted, and anonymized session for each user. Data entered during these sessions is not monitored by Microsoft, is deleted after use, and is not used to train the AI model or refine information for other users. This allows Connor Group to maintain its privacy and data-sensitivity requirements for information entered into the engine.

  1. Accessing Bing Chat Enterprise requires users to use Microsoft's Edge browser and to be signed in with their work account.
    1. The user account is validated only to confirm licensing; no other tracking or data retention applies to the chat session.
  2. Bing Chat Enterprise does not support a chat history feature, meaning it does not retain chat prompts or responses outside the active session. 
  3. Microsoft deletes the session and associated data when the browser is closed or the user presses the 'New Topic' button.
  4. Chat data sent to and from Bing Chat Enterprise is encrypted, protecting it from unauthorized inspection.

CG Chat

Connor Group has piloted and released CG Chat (cgchat.connorgp.com) as a secure, private version of ChatGPT. CG Chat utilizes a private-use ChatGPT tenant so that neither data nor usage patterns are fed back into the model, protecting the privacy and security of the user.

  1. Accessing CG Chat requires users to be signed in with their work account.
  2. CG Chat saves session information and is user-specific. Sessions are not accessible across users (Professional1 cannot see what Professional2 has entered into CG Chat).

Microsoft Copilot

Connor Group has deployed Microsoft Copilot onto managed devices. Copilot securely integrates a private GPT-4 instance with Microsoft Graph to give the AI access to the Microsoft 365 tenant. This allows Copilot to search and reference content within Connor Group's SharePoint, OneDrive, Outlook, and Office implementations as part of its responses (a hypothetical sketch of this kind of Graph-based tenant access follows the list below).

  1. Accessing Copilot requires users to be signed in with their work account and to use a Microsoft OS or the Windows 365 application.
  2. The Copilot Windows OS client is only available and supported on corporate-owned devices.
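For illustration only, the sketch below shows the kind of Microsoft Graph call that surfaces tenant content (here, the signed-in user's recently used OneDrive files). It is not how Copilot itself is invoked or configured; the access token is a placeholder, and the endpoint is simply an example of the Microsoft 365 data that Graph-connected AI responses can draw on.

```python
"""
Hypothetical illustration of Microsoft 365 content reachable through
Microsoft Graph. This is NOT Copilot's interface; it is a plain Graph
API call showing the kind of tenant data (recently used OneDrive files)
that grounded responses may reference. The token is a placeholder.
"""
import requests

GRAPH_BASE = "https://graph.microsoft.com/v1.0"


def list_recent_files(access_token: str) -> list[str]:
    """Return names of the signed-in user's recently used files via Microsoft Graph."""
    response = requests.get(
        f"{GRAPH_BASE}/me/drive/recent",
        headers={"Authorization": f"Bearer {access_token}"},
        timeout=30,
    )
    response.raise_for_status()
    return [item.get("name", "") for item in response.json().get("value", [])]


if __name__ == "__main__":
    # Placeholder: a real token comes from signing in with the work account.
    token = "<work-account-access-token>"
    for name in list_recent_files(token):
        print(name)
```

Because Graph calls made with a delegated work-account token return only content that account can already access, this access pattern stays consistent with the tenant-scoped privacy model described above.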


Compliance and Reporting

  1. Engagement leaders and other professionals with supervisory duties are accountable for ensuring that their teams are aware of and comply with this policy and that any use of generative AI is appropriate, accurate, and proper.
  2. Professionals should report any violations of this policy to their engagement leaders or Human Resources.


Please direct any questions to Human Resources.