General Attitudes Toward Generative AI
Overview
Measuring student attitudes toward generative AI is crucial for educators seeking to integrate these technologies responsibly in academic settings. As generative AI tools become increasingly prevalent in education, understanding student perspectives, concerns, and comfort levels enables educators to design more effective curricula, address ethical considerations, and create supportive learning environments. This assessment instrument provides a comprehensive framework for gathering student perspectives across multiple dimensions of AI engagement in educational contexts.
Currently, there is a significant gap in validated assessment tools specifically designed to measure student attitudes toward generative AI in education. Most existing instruments focus on general technology acceptance or AI applications in other domains, leaving educators without reliable measures for understanding their students' perspectives on these rapidly evolving tools.
Background & Context
The integration of generative AI in higher education has accelerated rapidly since 2022, with tools like ChatGPT, Claude, and GitHub Copilot becoming commonplace in academic settings. However, this integration has been largely ad hoc, with significant variation in institutional policies, instructor approaches, and student preparation. Research indicates that student attitudes toward AI in education are complex, encompassing dimensions of utility, ethics, self-efficacy, and emotional responses.
Students report mixed feelings about AI integration: excitement about enhanced capabilities alongside concerns about academic integrity, dependency, and fairness. Understanding these attitudes is essential for:
- Developing appropriate AI literacy curricula
- Creating balanced institutional policies
- Supporting student success with AI tools
- Addressing ethical and pedagogical concerns proactively
Theoretical Framework
This assessment instrument draws from several theoretical frameworks:
Technology Acceptance Model (TAM): Examines perceived usefulness and ease of use as predictors of technology adoption, extended here to include educational contexts and ethical considerations.
Self-Efficacy Theory: Explores students' confidence in their ability to use AI tools effectively and appropriately, drawing from Bandura's work on self-efficacy in learning contexts.
Attitude-Behavior Relationship: Recognizes that attitudes toward AI influence usage patterns, academic integrity decisions, and learning strategies.
Ethical Decision-Making Frameworks: Incorporates moral reasoning about AI use in academic contexts, including considerations of fairness, authenticity, and intellectual responsibility.
Research Background
Recent studies reveal significant variability in student attitudes toward AI in education:
- Usage Patterns: Studies indicate 40-60% of students have used generative AI for academic work, with higher usage in STEM fields and among graduate students (Raman et al., 2024).
- Ethical Concerns: Research shows students struggle with defining appropriate AI use, with many expressing uncertainty about institutional expectations and academic integrity implications (Abbas et al., 2023).
- Self-Efficacy Variations: Students report varying confidence levels in AI tool usage, with significant differences based on prior technology experience and disciplinary background (Chen et al., 2024).
- Emotional Responses: Studies document mixed emotional reactions, including excitement about capabilities alongside anxiety about dependency and future career implications (Johnson & Lee, 2024).
Instrument Development Notes
The 30-item instrument was developed through:
- Literature review of existing technology acceptance measures
- Focus groups with students across multiple disciplines
- Expert review by educational technology researchers
- Pilot testing with undergraduate and graduate students
Pilot Reliability: Initial pilot testing suggests Cronbach's alpha values of 0.82-0.89 for the five subscales, indicating good internal consistency.
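As an illustration, the internal-consistency figures above can be reproduced with a few lines of Python. The sketch below assumes responses are stored in a pandas DataFrame with one column per item; the item_01 ... item_30 column names are an assumed convention, not part of the instrument.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of total)."""
    items = items.dropna()
    k = items.shape[1]                          # number of items in the subscale
    item_vars = items.var(axis=0, ddof=1)       # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Example: alpha for the General Attitudes subscale (items 1-6).
# alpha = cronbach_alpha(responses[[f"item_{i:02d}" for i in range(1, 7)]])
```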
Factor Analysis: Exploratory factor analysis supported a five-factor structure (General Attitudes, Course-Specific Perceptions, Personal Experience, Institutional Context, and Mindset and Emotional Reactions), explaining 67% of total variance; confirmatory work is still needed (see Future Research Directions).
Assessment Instrument
Respond to each item using a 5-point scale: Strongly Disagree (1), Disagree (2), Neutral (3), Agree (4), Strongly Agree (5). Items are numbered consecutively for use with the scoring guide below.
General Attitudes Toward Generative AI in Education
1. I feel comfortable using generative AI tools in academic settings.
2. I believe generative AI can be a valuable learning tool.
3. I am concerned about the ethical implications of using generative AI in education.
4. I believe generative AI will significantly impact the future of teaching and learning.
5. I feel that generative AI should be more widely integrated into higher education.
6. I believe students should be explicitly taught how to use generative AI ethically.
Generative AI in This Class
7. I appreciate that we are exploring generative AI in this class.
8. I feel that using generative AI in this class helps me engage more deeply with the material.
9. I feel comfortable discussing my use of generative AI with my instructor.
10. I trust that my instructor is using generative AI in a way that supports learning.
11. I feel that my instructor’s approach to generative AI fosters an open and reflective learning environment.
Personal Experience Using Generative AI in Classes
12. I feel confident in my ability to use generative AI effectively for learning.
13. I use generative AI to help me brainstorm or structure my writing.
14. I use generative AI to check my understanding of concepts.
15. I feel that using generative AI makes my work better.
16. I worry that relying on generative AI will weaken my critical thinking skills.
17. I feel pressure to use generative AI because I assume other students are using it.
18. I feel guilty when I use generative AI for schoolwork.
19. I feel that using generative AI in my coursework is a form of academic dishonesty.
20. I feel that using generative AI in this class is different from using it in other classes.
Perceptions of Generative AI in Other Classes
21. I feel comfortable using generative AI in other courses outside this class.
22. I worry that other instructors may view generative AI use negatively.
23. I feel that policies about generative AI use at my institution are clear.
24. I wish more instructors encouraged discussions about generative AI.
25. I feel that different professors have conflicting expectations about AI use in coursework.
Mindset and Emotional Reactions
26. Using generative AI makes me feel more confident in my academic work.
27. I feel anxious about how generative AI will change education.
28. I feel excited about the possibilities generative AI offers.
29. I feel shame when I use generative AI for schoolwork.
30. I feel that generative AI is just another tool, like a calculator or spell check.
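For later analysis, the instrument's structure can be encoded once and reused throughout the scoring steps described in the Data Analysis Framework below. This is a hypothetical encoding under the same assumed item_NN column-naming convention:

```python
# Subscale membership, keyed to the item numbers above.
SUBSCALES = {
    "general_attitudes":     [f"item_{i:02d}" for i in range(1, 7)],    # items 1-6
    "course_specific":       [f"item_{i:02d}" for i in range(7, 12)],   # items 7-11
    "personal_experience":   [f"item_{i:02d}" for i in range(12, 21)],  # items 12-20
    "institutional_context": [f"item_{i:02d}" for i in range(21, 26)],  # items 21-25
    "emotional_reactions":   [f"item_{i:02d}" for i in range(26, 31)],  # items 26-30
}

# Negatively worded items, reverse-coded before scoring (see Data Analysis Framework).
REVERSE_CODED = [3, 16, 17, 18, 19, 22, 25, 27, 29]
```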
Implementation Guide
Pre-Assessment Considerations
Optimal Timing: Administer during weeks 2-3 of the semester after students have been introduced to course expectations but before major assignments. For longitudinal studies, readminister at mid-semester and end-of-semester.
Response Format: Anonymous responses are recommended to encourage honest reporting, particularly for sensitive items about academic integrity and emotional reactions.
Platform Recommendations:
- Google Forms: Free, easy setup, good for basic analysis
- Qualtrics: Advanced features, better for research use
- Survey Monkey: User-friendly, good reporting features
- Canvas/LMS Integration: Seamless for course-based assessment
IRB Considerations: If using for research purposes, obtain IRB approval. Assessment for course improvement typically falls under educational practice exemptions.
Administration Instructions
Suggested Introduction Text for Students:
"This brief survey helps me understand your perspectives on generative AI tools (like ChatGPT, Claude, or Copilot) in educational settings. Your honest responses will help me design course activities and policies that support your learning. This survey is anonymous and will not affect your grade."
Time Requirements: 8-12 minutes average completion time
Response Scale: 5-point Likert scale recommended for simplicity, though 7-point scales provide more nuance for research applications
Optional Demographic Questions:
- Academic year (Freshman/Sophomore/Junior/Senior/Graduate)
- Major/discipline
- Prior experience with AI tools (None/Some/Extensive)
- Frequency of current AI tool use (Never/Rarely/Sometimes/Often/Daily)
Data Analysis Framework
Scoring and Interpretation
Reverse-Coding Items: Items 3, 16, 17, 18, 19, 22, 25, 27, and 29 are negatively worded and should be reverse-coded before analysis (5→1, 4→2, 3→3, 2→4, 1→5).
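A minimal reverse-coding sketch under the same assumptions as earlier (pandas DataFrame, assumed item_NN column names):

```python
import pandas as pd

def reverse_code(df: pd.DataFrame, items: list, scale_max: int = 5) -> pd.DataFrame:
    """Reverse-code negatively worded items on a 1..scale_max Likert scale."""
    df = df.copy()
    for i in items:
        col = f"item_{i:02d}"                  # assumed column naming convention
        df[col] = (scale_max + 1) - df[col]    # 5->1, 4->2, 3->3, 2->4, 1->5
    return df

# Example: two respondents; item_03 is flipped, item_01 is untouched.
demo = pd.DataFrame({"item_01": [4, 2], "item_03": [5, 1]})
print(reverse_code(demo, [3]))                 # item_03 becomes [1, 5]
```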
Subscale Calculations:
- General Attitudes (Items 1-6): Mean score indicates overall openness to AI in education
- Course-Specific Perceptions (Items 7-11): Measures comfort and perceived effectiveness in the specific course context
- Personal Experience (Items 12-20): Assesses individual usage patterns, self-efficacy, and integrity concerns
- Institutional Context (Items 21-25): Captures perceptions of policies and expectations across courses
- Mindset and Emotional Reactions (Items 26-30): Captures emotional responses to AI in education
Interpretation Guidelines:
- 1.0-2.4: Negative attitudes (may indicate need for additional support/training)
- 2.5-3.4: Neutral/ambivalent attitudes (normal range, opportunity for education)
- 3.5-5.0: Positive attitudes (good foundation for AI integration)
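Combining the pieces above, subscale scoring and interpretation might look like the following sketch, which assumes the SUBSCALES dictionary and the reverse-coded responses from the earlier snippets:

```python
import pandas as pd

def score_subscales(df: pd.DataFrame, subscales: dict) -> pd.DataFrame:
    """Mean score per subscale per respondent; reverse-code items first."""
    return pd.DataFrame({name: df[cols].mean(axis=1) for name, cols in subscales.items()})

def interpret(score: float) -> str:
    """Map a mean subscale score onto the interpretation bands above."""
    if score < 2.5:
        return "negative"   # 1.0-2.4: may need additional support/training
    if score < 3.5:
        return "neutral"    # 2.5-3.4: ambivalent; opportunity for education
    return "positive"       # 3.5-5.0: good foundation for AI integration

# scores = score_subscales(responses, SUBSCALES)
# labels = scores["general_attitudes"].map(interpret)
```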
Red Flag Indicators Requiring Intervention:
- High raw scores (before reverse-coding) on items 18, 19, or 29 (guilt, dishonesty, and shame concerns)
- Low scores on items 1, 7, or 12 (comfort and confidence issues)
- A high raw score on item 17 (peer pressure) combined with a low score on item 23 (policy clarity)
Recommended Statistical Analyses
Descriptive Statistics:
- Mean and standard deviation for each subscale
- Frequency distributions for individual items
- Identification of ceiling/floor effects
Correlation Analysis:
- Examine relationships between subscales
- Correlate with demographic variables (year, major, experience)
- Identify items that may not align with subscale structure
Comparative Analysis:
- Compare means across academic years, majors, or experience levels
- Use t-tests or ANOVA for group comparisons
- Effect size calculations for practical significance
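The correlation and group-comparison analyses above can be run with scipy and pandas; the sketch below uses simulated data, so the group labels and values are illustrative only:

```python
import numpy as np
from scipy import stats

def cohens_d(a, b):
    """Cohen's d using a pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * np.var(a, ddof=1) + (nb - 1) * np.var(b, ddof=1))
                     / (na + nb - 2))
    return (np.mean(a) - np.mean(b)) / pooled

# Simulated subscale means for two groups (e.g., STEM vs. humanities majors).
rng = np.random.default_rng(0)
stem = rng.normal(3.8, 0.6, 40)
humanities = rng.normal(3.4, 0.6, 35)

t, p = stats.ttest_ind(stem, humanities)
print(f"t = {t:.2f}, p = {p:.3f}, d = {cohens_d(stem, humanities):.2f}")

# Subscale intercorrelations: score_subscales(responses, SUBSCALES).corr()
# For three or more groups (e.g., academic year), use a one-way ANOVA:
# f, p = stats.f_oneway(freshmen, sophomores, juniors)
```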
Adaptations & Extensions
Version Variations
Pre/Post Course Versions: Modify items 7-11 to focus on "before this course" vs. "after this course" for measuring course impact.
Instructor Self-Assessment Version: Adapt items for faculty to assess their own attitudes and readiness for AI integration.
Shortened 12-Item Version: For quick pulse checks, use a balanced subset such as items 1, 2, 3, 7, 9, 12, 16, 19, 21, 23, 27, and 28 (two to three items per subscale, retaining negatively worded items).
Discipline-Specific Adaptations:
- STEM Fields: Add items about code generation, problem-solving assistance
- Humanities: Include items about writing support, research assistance
- Business: Focus on professional preparation and workplace readiness
Longitudinal Use
Tracking Attitude Changes: Use identical items across time points with additional questions about intervening experiences.
Course Impact Measurement: Compare pre/post course administration to assess pedagogical effectiveness.
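Because pre/post comparisons involve the same students at two time points, a paired-samples test is appropriate; a sketch with simulated data:

```python
import numpy as np
from scipy import stats

# Simulated pre/post subscale means for the same 30 students (paired design).
rng = np.random.default_rng(1)
pre = rng.normal(3.2, 0.5, 30)
post = pre + rng.normal(0.3, 0.4, 30)    # hypothetical average gain of 0.3

t, p = stats.ttest_rel(pre, post)        # paired-samples t-test
print(f"t = {t:.2f}, p = {p:.3f}, mean change = {np.mean(post - pre):+.2f}")
```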
Institutional Policy Assessment: Use aggregate data to inform institution-wide AI policies and support services.
Integration with Course Design
Using Results for Course Planning
High Comfort/Positive Attitudes: Integrate advanced AI applications, focus on ethical use and critical evaluation.
Mixed/Neutral Attitudes: Provide structured introduction to AI tools, emphasize benefits while addressing concerns.
Low Comfort/Negative Attitudes: Start with basic AI literacy, address misconceptions, provide extensive support.
Identified Concerns: Create targeted discussion sessions, develop clear usage guidelines, provide examples of appropriate use.
Assessment Integration
Correlating with Performance: Examine relationships between attitudes and learning outcomes, AI tool effectiveness.
Informing Feedback: Use attitude data to provide personalized guidance on AI tool integration.
Creating Support Systems: Form study groups or peer mentoring based on comfort levels and experience.
Additional Resources
Sample Consent Forms and IRB Language
- Template consent language for research use
- Educational practice exemption documentation
- Data privacy and retention guidelines
Related Assessment Instruments
- AI Literacy Assessment - Measures knowledge and skills
- Digital Citizenship Evaluation - Assesses ethical technology use
- Critical Thinking in AI Contexts - Evaluates analytical skills
Key Researchers in AI Education Attitudes
- Rebecca Eynon (Oxford) - AI in higher education
- Neil Selwyn (Monash) - Critical perspectives on educational technology
- Michelle Zimmerman (NC State) - AI literacy and assessment
Recommended Follow-Up Qualitative Questions
- "Describe a specific experience using AI tools for academic work."
- "What concerns do you have about AI use in education?"
- "How do you think AI tools should be integrated into courses?"
- "What support would help you use AI tools more effectively?"
Future Research Directions
Validation Studies Needed
- Cross-institutional validation with diverse student populations
- Confirmatory factor analysis to verify the five-factor structure
- Predictive validity studies linking attitudes to behavior and outcomes
- Test-retest reliability assessment
Cross-Cultural Adaptation Considerations
- Cultural differences in AI acceptance and ethical frameworks
- Translation and adaptation for non-English speaking populations
- Comparison of attitudes across international educational systems
Integration with Learning Analytics
- Correlating attitude data with actual AI tool usage patterns
- Examining relationship between attitudes and learning outcomes
- Developing predictive models for AI integration success
Longitudinal Development Studies
- Tracking attitude changes throughout undergraduate careers
- Impact of specific interventions on attitude formation
- Career outcome correlations with early AI attitudes
This assessment instrument offers a comprehensive tool for understanding student attitudes toward generative AI in educational contexts. Regular updates and refinements are recommended as research and the technology landscape continue to evolve.