Guidance and tools for secure, ethical use of AI at HMS

Generative AI offers new opportunities in research, education, and productivity — but it must be used thoughtfully.

This page builds on Initial guidelines for the use of Generative AI tools at Harvard to outline HMS recommendations and approved tools to help you use AI responsibly, in line with policies and data security requirements. Learn how to choose the right tool, meet compliance standards, and protect confidential information.  

Responsible use

Use AI tools thoughtfully and within academic boundaries.

Responsible AI use at HMS includes ethical, transparent practices:

  • Verify outputs. AI-generated content may be incorrect or fabricated. Always fact-check before using or publishing results.
  • Uphold academic integrity. Follow faculty or student handbook policies. Be transparent with students and colleagues about acceptable use.
  • Cite AI contributions appropriately in research and academic work. Refer to guidance from COPE and WAME for citation standards.

Data and security

Protect sensitive information and maintain compliance.

Before using any generative AI tools, ensure you're complying with Harvard’s data and security policies:

  • Avoid entering confidential information (Level 2 and above) into public-facing AI platforms.
  • Classify your data appropriately. Most AI tools are approved for Level 3 and below. For Level 4 or regulated data, contact hms-it-ai@hms.harvard.edu before proceeding.
  • Consult HMS IT for support with data security, platform vetting, and vendor risk. Use this form to request help.
  • Comply with regulations such as HIPAA, FERPA, and sponsor-specific data requirements.
  • Meet legal and funding agency requirements. Align AI use with guidance from NIH, COPE, and other relevant funding and publishing bodies.
  • Beware of phishing. Generative AI can be used for malicious purposes like phishing. Stay alert and report suspicious messages to phishing@harvard.edu.
  • For help with licensing terms or vendor contracts, email procurement@hms.harvard.edu.

Getting started

Steps to access and begin using generative AI tools.

To begin using AI tools at HMS, choose the right tool based on your use case:

  • Text and code creation – Harvard AI Sandbox, ChatGPT Edu
  • Image creation – Harvard AI Sandbox, Adobe Firefly
  • Compare models – Harvard AI Sandbox
  • Custom GPTs – ChatGPT Edu
  • Information retrieval – ChatGPT Edu, Harvard AI Sandbox
  • App development – HUIT API Portal, HMS Azure AI, or Longwood Cluster

After you have selected the tools that you need:

  1. Complete the required training listed on the tool's service page.
  2. Submit a request form to gain access to the tool.
  3. Contact support teams with any issues or questions related to the tools.

Supported tools

Compare features, use cases, and access requirements.

The following table summarizes key information about available generative AI tools at HMS:

| Tool | Use cases | Data security level | Eligibility | Training required | Cost |
| --- | --- | --- | --- | --- | --- |
| AI Sandbox | Create content; compare models | Level 3 | Quad-based faculty, staff, MD/DMD students; some others via courses | Yes | Covered by HMS IT |
| ChatGPT Edu | Text, images, custom GPTs, info retrieval | Level 3 | Licensed users with specific needs | Yes | Covered by HMS IT |
| Adobe Firefly | Image creation | Level 3 | Users with Adobe Creative Cloud | No | Included in Adobe license |
| HMS Azure AI | Azure OpenAI API access, large dataset analysis | Level 4 | HMS IT, project developers | No | Varies with use |
| HUIT API Portal | API access for app development | Level 3 | HMS IT, project developers | No | Varies with use |
| Longwood Cluster | High-compute AI workloads | Level 4 | HMS researchers | No | Free for DIA winners |

Tools and policies

  • Harvard-endorsed services for the community.
  • Data protection and IT security requirements.
  • How to handle confidential data when using AI.

Ethics and regulations

  • Authorship ethics for AI-generated content.
  • Editorial guidance for medical writing.
  • AI usage in NIH-supported research.