Tools & Access at CC
While CC doesn’t officially endorse or prohibit specific tools, the list below highlights how different Generative AI tools vary in risk, from generally safer options to those that require more caution. If you’d like to use a GenAI tool—or if a tool you already use introduces a new AI feature—ITS asks that you submit it for review through the tech adoption process before using it.
Tier 1: Tools That Run Locally
These tools are generally the safest to use because they run on your computer's local hardware and do not "phone home" (they don't export your data or use your prompts to further train the models).
- macOS and iOS embedded AI tools
- Whisper
- Ollama
- Jan
- Ensu
Tier 2: Enterprise Tools
These are not officially endorsed tools, but with enterprise access, they offer stronger privacy protections than their consumer versions.
- Microsoft Copilot (good for administrative work)
- With web-based Office 365 apps (Outlook, Word, Excel, Teams): When you're signed in with your CC account, you're using the enterprise version of Copilot. Your prompts and data will not be used to train public GenAI models, and Copilot can only see the CC information you already have permission to access.
- Copilot in the Microsoft Edge browser: If you're signed into Edge or Bing with your CC account, you're also using the enterprise version with the same privacy protections. If you're not signed in with your CC account, you are using the consumer version, which does not have these protections.
Tier 3: Generally Good Options
These tools may be helpful depending on your goals. As always, use them transparently and thoughtfully, and follow course or departmental expectations.
- Creative Work: Adobe Creative Cloud GenAI tools are generally designed to support, not replace, human creativity.
- Writing: Grammarly can be helpful for brainstorming, adjusting tone, and proofreading (checking for proper spelling, punctuation, and grammar).
- Coding: Cursor or Windsurf can be powerful tools, but take care to avoid over-reliance.
- Chatbots: Anthropic's Claude and Haystack are designed to acknowledge uncertainty and reduce guessing. More broadly trained tools are more prone to producing errors.
- Learning/Tutoring: Khanmigo was developed from the Khan Academy learning library.
- Embedded AI in applications: You might see AI tools in other software you use, such as Salesforce, Slate, and Coursedog. Run these through tech adoption before using them so ITS can review security concerns and contract language.
- Discovering new tools: There's An AI For That is a directory for comparing GenAI tools.
Tier 4: Higher-Risk Options
These tools may be popular, but they raise significant concerns around accuracy, privacy, or misuse.
- Search engine tools (e.g., Gemini): Quick summaries can introduce bias and reduce critical evaluation of sources.
- Voice and video generation tools (e.g., ElevenLabs, Sora, Veo): These are frequently used for deepfakes and raise ethical issues of consent, attribution, and harm.
- Highly generalized AI chatbots (e.g., Gemini, ChatGPT, Meta AI, Grok): Prone to producing confident-sounding but inaccurate responses.
- Browser-based GenAI plug-ins (e.g., Perplexity, Gemini Chrome plug-in): May collect or transmit more data than intended.
- Agentic AI browsers (e.g., Comet, ChatGPT Atlas): Agentic AI in general is high risk, as these tools complete tasks and access your information without your intervention or oversight.
Safety Recommendations
Whenever possible, choose GenAI tools that provide enterprise-level privacy protections. Even then, avoid entering any information you wouldn't want shared.
Do not input:
- FERPA-protected student information: grades, schedules, records
- HIPAA-related health data: medical or counseling details
- Personally identifying information: Social Security Numbers, addresses, birthdates, passwords
- Confidential college data: budgets, contracts, donor records
- Sensitive research data: unpublished or restricted materials
Note: Most Generative AI detection tools are not an accurate way to confirm whether GenAI has been used, and they should not be relied upon for academic or administrative purposes.
Always review GenAI outputs for:
- Accuracy
- Bias
- Completeness
- Tone
Agentic AI—autonomous systems that carry out multi-step tasks without human intervention—should be avoided in academic and administrative settings. Agentic AI carries a high risk of cascading errors, security and data exposure, and misalignment with the original intent. Always keep a human "in the loop"!