Secure generative AI for the Harvard community

The AI Sandbox provides a secure environment to explore Generative AI, minimizing security and privacy risks. Data entered into the Sandbox will not be used to train any vendor's large language models (LLMs).

The platform offers a single interface with access to the latest LLMs from OpenAI, Anthropic, Google, and Meta.

The AI Sandbox includes several tools to enhance your AI projects:

  • Data visualization – Generate charts and graphs to visualize data insights.
  • Multi-file upload – Upload multiple files to streamline data processing.
  • Image generation – Create visual content based on text prompts.


Eligibility

Available to Quad-based faculty and staff, and to students as part of courses

The Harvard AI Sandbox is available to:

  • HMS and HSDM Quad-based faculty and staff – Faculty and staff at the Harvard Medical School (HMS) and Harvard School of Dental Medicine (HSDM) Quad can request access.
  • Students – All degree students are automatically granted access.

Security

Approved for up to Level 3 data

The Harvard AI Sandbox is approved for up to Medium Risk Confidential (Level 3) data. Data you enter will not be used to train large language models.

Support

HMS provisions users, HUIT provides ongoing support

The HMS IT Teaching and Learning Technologies team provisions user accounts for Harvard Medical School (HMS) users in the Harvard AI Sandbox. To request access, use the AI Sandbox access request form.

Harvard University Information Technology (HUIT) maintains and supports the Harvard AI Sandbox. For support with issues other than access requests, contact HUIT.

Cite generative AI output appropriately to avoid plagiarism.