Generative AI can lighten routine administrative work, from drafting correspondence to summarizing meetings, but its use with university data must stay deliberate and transparent. This section outlines practical applications, privacy safeguards, the approval path for new tools, and links to training resources so staff can adopt AI responsibly.

Practical Examples of AI Use Cases

  • Use AI tools to generate first drafts of standard communications such as reminders, status updates, and meeting summaries, then check, edit, and update the output.
  • Convert long meeting transcripts into concise bullet-point summaries that highlight key decisions and action items.
  • Analyze qualitative data to surface themes and patterns for further review and verification.
  • Ask AI tools to help solve coding problems or build complicated spreadsheet formulas.

General Administrative Tasks

  • Email Drafting & Editing: Quickly draft replies, meeting recaps, reminders, or policy summaries, then review and edit.
  • Meeting Notes & Summaries: Use AI to clean up or condense meeting transcripts or notes, then review and edit.
  • Form Letter & Template Generation: Draft consistent communications for outreach, student services, financial aid, etc.; always review and edit.

Data, Planning, & Reporting

  • Survey Summary Analysis: Paste open-ended survey results and get a first-draft summary of themes (then validate!).
  • Report Outlines: Generate first-draft structures or content suggestions for annual reports or strategy documents.
  • Brainstorming Session Support: Use AI to synthesize brainstorming notes or create idea categories.
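The validation step matters most for survey summaries. One cheap sanity check, sketched below with hypothetical data and a made-up stopword list, is a plain word-frequency count over the raw responses: if the AI claims a dominant theme, its keywords should rank highly in this baseline. This is an illustrative sketch, not a substitute for reading the responses.

```python
import re
from collections import Counter

def keyword_counts(responses: list[str], top_n: int = 5) -> list[tuple[str, int]]:
    """Crude frequency baseline for sanity-checking an AI theme summary.

    Counts non-stopword words across all responses and returns the
    top_n most common, so claimed themes can be checked against it.
    """
    # Minimal illustrative stopword list -- expand for real use.
    stopwords = {"the", "a", "an", "and", "or", "to", "of", "is", "it", "in", "i"}
    words: list[str] = []
    for response in responses:
        words += [w for w in re.findall(r"[a-z']+", response.lower())
                  if w not in stopwords]
    return Counter(words).most_common(top_n)
```

For example, if an AI summary of campus feedback claims "parking" is the top theme, that word should appear near the top of `keyword_counts(responses)`; if it does not, read the responses yourself before trusting the summary.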


Data Privacy Cautions

Use of AI tools must always align with university policies, procedures, and guidance. The university offers several AI tools through your university account; these are detailed on the AI Tool Use & Confidentiality page. When using any other commercially or publicly available AI tool, avoid entering any sensitive information.

Staff must not independently acquire or subscribe to AI-based tools or services without following university procurement and contract approval processes. All third-party tools must meet institutional standards for data security, accessibility, and legal compliance. This includes both free and paid services.

AI Tool Use & Confidentiality

AI Literacy Micro-Modules

Below are three short primers, each designed to be read in about two minutes, that cover common pitfalls and best practices when working with generative AI.

Large language models predict the next word rather than retrieve verified data; when the context runs thin, they invent material. This hallucination risk can surface as fake citations, imaginary statistics, or fabricated quotations.

Why Does AI Hallucinate?

Generative AI systems produce inaccurate content for three main reasons:

  1. Training Data Sources: AI models learn from internet data that contains misinformation, so they reproduce falsehoods without checking facts.
  2. How Generative Models Work: These systems predict what comes next based on patterns, focusing on plausible-sounding content rather than accuracy.
  3. Design Limitations: The technology can't distinguish between true and false information, so it generates new inaccurate content by combining patterns.

AI hallucinations result from flawed training data, pattern-based generation, and fundamental technology limits.

How to keep control

  • Treat AI output as a draft, never a final product.
  • Ask the model for DOIs or URLs, then check them; fake links break on inspection.
  • Cross‑check facts and references in databases or peer‑reviewed literature.
  • Verify any assertions or claims you make so you can document or cite them. 
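Part of the "check them" step can be scripted. The sketch below (function names are my own, not a university tool) flags AI-supplied DOIs whose format is invalid. Note the limits: a malformed DOI proves a citation is fake, but a well-formed one can still be fabricated, so real verification still means resolving the reference at doi.org or in a library database.

```python
import re

# DOIs follow the pattern "10.<registrant>/<suffix>".
# A failed match proves a citation is bogus; a match does NOT prove it is real.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

def looks_like_doi(candidate: str) -> bool:
    """Return True if the string is plausibly formatted as a DOI."""
    return bool(DOI_PATTERN.match(candidate.strip()))

def flag_suspect_dois(citations: dict[str, str]) -> list[str]:
    """Given {citation: doi} pairs from an AI draft, list the citations
    whose DOI is malformed and therefore certainly needs replacing."""
    return [ref for ref, doi in citations.items() if not looks_like_doi(doi)]
```

Citations that pass this formatting check still go through the manual cross-checking described in the bullets above.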

Take‑away: always verify with an authoritative source.

Deeper dive: Park, D. M., & Lee, H. J. (2024). Literature Review of AI Hallucination Research Since the Advent of ChatGPT: Focusing on Papers from arXiv. Informatization Policy, 31(2), 3-38.

AI echoes its training data, which often favor dominant groups. This can appear as the omission of underrepresented scholars, stereotyped language, or skewed sentiment analysis.

Why Does AI Produce Biased Content?

Generative AI systems produce biased content for three main reasons:

  1. Training Data Sources: AI models learn from internet data that reflects existing social and cultural biases from human-created content.
  2. How Generative Models Work: These systems reproduce patterns from their training data, including discriminatory language and perspectives they've observed.
  3. Design Limitations: The technology can't identify or correct biases, so it amplifies existing prejudices by combining biased patterns in new ways.

AI bias results from biased training data, pattern reproduction, and the inability to recognize discriminatory content.

Spot and reduce bias

  • Prompt for multiple perspectives or under‑represented voices.
  • Ask the model to list its main sources and examine their diversity.
  • Compare AI summaries with scholarly reference works or peer-reviewed literature.
  • Document how you checked for bias.

Bottom line: inclusive prompts and critical reading can help reduce the impact of algorithmic echo chambers.

Deeper dive: Zhou, M., Abhishek, V., Derdenger, T., Kim, J., & Srinivasan, K. (2024). Bias in generative AI. arXiv preprint arXiv:2403.02726.

Public AI services often store prompts and reserve the right to review them. Some retrain on user data. Uploading sensitive material can violate privacy laws or research ethics.

Why Does AI Pose Privacy Risks?

AI systems create privacy concerns for users in three main ways:

  1. Training Data Collection: AI models are trained on vast quantities of data, much of which is composed of copyrighted material, and using copyrighted works to train AI models may constitute prima facie infringement.
  2. User Data Retention: Queries, instructions, and conversations with tools such as ChatGPT may be stored indefinitely unless deleted by the user, including sensitive data like personal details and proprietary information.
  3. Inadvertent Data Exposure: There is a risk of inadvertently capturing and exposing sensitive user data, particularly in chatbots where personal or confidential information is often disclosed.

AI privacy risks stem from unauthorized use of copyrighted training data, indefinite storage of user conversations, and potential exposure of sensitive information shared with AI tools.

Safeguards

  • Strip names, IDs, and unpublished data.
  • Avoid uploading copyrighted texts.
  • Use university‑approved tools for institutional data.
  • Consult your supervisor or faculty advisor regarding high‑risk cases.
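The first safeguard, stripping names and IDs, can be partly scripted before text is pasted into a public tool. The sketch below is illustrative only: the two patterns (email addresses and nine-digit ID numbers) are assumptions about what identifiers look like, and regex redaction will miss many formats, so a human must still review the output.

```python
import re

# Assumed identifier formats -- adjust to your institution's actual patterns.
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")
NINE_DIGIT_ID = re.compile(r"\b\d{9}\b")

def redact(text: str) -> str:
    """Replace obvious emails and nine-digit IDs with placeholders.

    A first pass only: review the result yourself before pasting it
    into any public AI tool, since regexes miss many identifiers.
    """
    text = EMAIL.sub("[EMAIL]", text)
    text = NINE_DIGIT_ID.sub("[ID]", text)
    return text
```

A script like this reduces accidental disclosure but does not make a public tool safe for institutional data; university-approved tools remain the right channel for that.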

Key point: treat public AI tools like social media—anything you post could become public.

Deeper dive: Golda, A., Mekonen, K., Pandey, A., Singh, A., Hassija, V., Chamola, V., & Sikdar, B. (2024). Privacy and security concerns in generative AI: a comprehensive survey. IEEE Access.

Professional Development

  • EDUCAUSE: Artificial Intelligence (AI) Library / Topic Page
    A curated collection of research, case studies, guidelines, and frameworks on how AI is being used in higher education for teaching, administration, policy, and more, including items such as "AI on campus," "AI classroom use," and "AI policy & legislation." Useful for staff and leadership building awareness of the opportunities and risks of AI in institutional settings.
    https://library.educause.edu/topics/infrastructure-and-research-technologies/artificial-intelligence-ai

  • Alison: Basics of Prompt Engineering
    A free online IT course that teaches how to design prompts for generative AI, including how prompts work, basic and advanced prompting methods (zero-shot, few-shot), generating realistic images (e.g., with DALL-E), style modifiers, outpainting, and best practices. Good for staff who will use or support generative AI tools.
    https://alison.com/course/basics-of-prompt-engineering

  • UMD Smith School: Free Online Certificate "Artificial Intelligence and Career Empowerment"
    Designed for early- to mid-career professionals, this certificate program introduces foundational AI literacy, shows how AI is transforming functional areas (marketing, finance, operations, etc.), covers responsible AI practices, and includes a career empowerment component (job search, negotiation, entrepreneurship). Provides lectures, expert interviews, and more.
    https://www.rhsmith.umd.edu/programs/executive-education/learning-opportunities-individuals/free-online-certificate-artificial-intelligence-and-career-empowerment

  • Simplilearn / SkillUp: Prompt Engineering Course
    A beginner-level, self-paced course (about one hour of video content) covering the essentials of prompt engineering: what it is, how LLMs work, prompt refinement, bias detection and mitigation, and improving prompt outputs. Includes a completion certificate. Good for staff who want quick, hands-on prompt engineering training.
    https://www.simplilearn.com/prompt-engineering-free-course-skillup