Artificial intelligence use guidelines from an IT leadership perspective
Donna Roach - University of Utah Health Chief Information Officer
Steve Hess - University of Utah Chief Information Officer
To further our collective commitment to high-quality patient care, education, and
research, we recognize that emerging, experimental artificial intelligence (AI)
tools such as ChatGPT offer potentially transformative benefits but must be used
with caution so that we avoid known pitfalls such as bias, privacy violations,
copyright infringement, and inaccuracy. These guidelines are intended to support
institutional standards for privacy, safety, ethical practices, and data security.
Safeguarding privacy and data security:
Public AI tools such as ChatGPT, and the data entered into them, fall within the
public domain and lack security and data-regulation compliance features. Therefore,
when using public AI tools, never enter or upload sensitive or restricted data, per
Rule 4-004C Data Classification and Encryption, including protected health information
(PHI), FERPA-protected data, personally identifiable information (PII), donor
information, intellectual property (IP) information, and payment card industry (PCI)
information. Note: A Business Associate Agreement (BAA) must be on file with the
University of Utah Health Privacy Office for AI-related and other IT products/vendors
that store or process PHI.
Mitigating misinformation:
AI-generated responses can be false. When an AI tool is trained on data containing
inaccuracies, its output is likely to be unreliable. This is particularly evident in
AI tools that draw from extensive datasets on the internet and other public sources,
which often include inaccurate information. Unverified output could compromise the
organization's credibility; consequently, if AI tool output is used as a source in
research or in authoring documents, the output must be verified for appropriateness
and accuracy, and the AI tool must be cited.
Confronting bias in AI output:
Data used to train AI tools may contain biases, and those biases may be reflected
in AI output. For example, if the training data reflects negative stereotypes or
prejudiced views, the AI might produce responses that align with those biases. AI
output must therefore be reviewed and edited before use to correct any bias that
may be present.
Upholding copyright and academic integrity:
Many public AI tools do not provide specifics about the data sources used to train
the technology. As a result, there is a risk that AI tools will reproduce copyrighted
material without proper attribution. AI tools should never be used in place of
professional healthcare recommendations or peer-reviewed research. As with
misinformation and bias, AI output must be reviewed for copyrighted material to
avoid plagiarism and misattribution.
AI as a continuous learning tool:
Through strategic integration, AI can enrich and supplement formal training and
continuous learning, but it does not replace them. Many resources are available to
help you stay informed and safely use AI technology to advance healthcare
initiatives, teaching, and research.
Reporting:
Promptly report concerns regarding the entry of sensitive or restricted data into
an AI tool, or a suspected IT security breach, to the University of Utah Information
Security Office at iso@utah.edu or 801-587-1925.
Further information:
Additional guidelines for AI use have been published by the university, including
Guidance on the use of AI in research, Fall 2023 instructional guidelines on
generative AI, and The data privacy “GUT check” for synthetic media like ChatGPT.
If you have questions about AI in general, specific tools, or these general
guidelines, please submit them using the contact form.