Review these recommendations carefully and adhere to them whenever you use generative AI tools for work at Harvard.
University-wide recommendations
In July 2023, Harvard University announced initial guidelines for using generative artificial intelligence (AI) tools like ChatGPT and Google Bard. Generative AI is a rapidly evolving field, and HMS and Harvard University will continue to monitor developments and update the community regularly.
HMS recommendations
AI data must be managed responsibly, with the same attention to integrity and security as any other data in our research or institution.
For responsible usage, incorporate these safeguards:
- Consult agency guidelines – Refer to your funding agency or sponsor's regulations and consult journals regarding AI tool usage in research publications, aligning with standards from the Committee on Publication Ethics (COPE) and the World Association of Medical Editors (WAME).
- Secure confidential data – Adhere to HIPAA, FERPA, and other applicable regulations to protect sensitive information before entering any data into an AI platform or tool.
- Legal compliance – Ensure that AI-generated data meets legal and funding agency requirements.
- HMS IT consultation – Contact HMS IT for advice on data security and vendor risk management concerning the use, safety, and privacy of HMS data when using AI tools.
- Procurement process – For assistance with contracting terms and the procurement of AI tools, contact HMS Procurement at procurement@hms.harvard.edu.
Additional considerations
While the University promotes responsible use of generative AI tools, bear in mind the following:
- Data privacy – Refrain from entering confidential data (Level 2 and above) into publicly accessible generative AI tools per the University's Information Security Policy.
- Content responsibility – Be mindful that AI-generated material can sometimes be incorrect or fabricated. Review the content rigorously before publication to avoid misinformation.
- Academic integrity – Familiarize yourself with the policies in the student and faculty handbooks. While these guidelines continue to evolve, make sure students clearly understand what constitutes acceptable use of AI tools in their academic work.
- Phishing awareness – Stay alert to AI-enabled phishing attempts, follow security best practices, and report suspicious messages to phishing@harvard.edu.