Bridging Integrity and Innovation: An AI Agent to Monitor, Evaluate, and Validate Student Effort in AI-Assisted Coursework
WIUT AI-Assisted Learning Effort Agent Proposal
2. Executive Summary
With the rapid adoption of generative AI tools like ChatGPT, higher education institutions face a new challenge: distinguishing authentic learning from AI-generated outputs. This proposal introduces an AI Agent that acts as an intelligent intermediary between students and AI tools. It records prompts, evaluates learning interactions, and calculates an "Effort Score", creating a transparent learning trace. Once a student reaches a predefined effort threshold, their use of AI is considered constructive and permissible, supporting authentic learning outcomes and academic integrity.
3. Problem Statement
- Students increasingly use AI tools to generate coursework.
- Academics are burdened with detecting AI usage, often relying on unreliable detection tools.
- A combative environment undermines trust and shifts the focus away from learning.
- Genuine learners who engage with AI meaningfully are unfairly penalized.
4. Solution Overview: The AI Effort Evaluation Agent
What it does:
- Acts as a proxy between the student and AI tools (e.g., ChatGPT); see the sketch after this list.
- Logs every prompt and response.
- Evaluates effort in real time using NLP heuristics and metadata.
- Builds a dynamic Effort Score that only increases.
- Generates a final report that includes:
  - Full interaction history
  - Learning summary
  - Effort milestones
  - AI transparency log
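To make the proxy idea concrete, the sketch below shows one possible shape of the forwarding-and-logging step. It is a minimal illustration rather than the proposed implementation: it assumes an Express server, the OpenAI Chat Completions endpoint as the upstream AI tool, an in-memory array standing in for the real database, and placeholder names such as the /api/prompt route and the model identifier.

```typescript
// Minimal proxy sketch: forward a student's prompt to the AI provider,
// log the prompt–response pair, then return the answer to the student.
import express from "express";

interface LoggedTurn {
  sessionId: string;
  prompt: string;
  response: string;
  timestamp: string;
}

const interactionLog: LoggedTurn[] = []; // placeholder for PostgreSQL / MongoDB

const app = express();
app.use(express.json());

// Students send prompts here instead of directly to ChatGPT.
app.post("/api/prompt", async (req, res) => {
  const { sessionId, prompt } = req.body as { sessionId: string; prompt: string };

  // Forward the prompt upstream (OpenAI Chat Completions shown as an example).
  const aiRes = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const data = (await aiRes.json()) as any;
  const response: string = data.choices?.[0]?.message?.content ?? "";

  // Log the full prompt–response pair before returning it to the student.
  interactionLog.push({ sessionId, prompt, response, timestamp: new Date().toISOString() });

  res.json({ response });
});

app.listen(3000);
```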
Outcome:
- Students may submit AI-assisted work with full transparency.
- Academics see a traceable learning journey.
- Integrity is preserved; learning is encouraged.
5. Flowchart: AI Agent Interaction Lifecycle
Student Login
-> Prompt Sent to AI Agent
-> Agent Forwards to ChatGPT
-> Agent Receives & Logs Response
-> Analyze Prompt Depth
-> Detect Iterative Refinement
-> Calculate Effort Rating
-> Update Cumulative Effort Score
-> Generate Learning Summary
-> Threshold Reached? (e.g., Score >= 100)
   - Yes -> Allow Final Export with Auth Summary
   - No -> Keep Logging Efforts
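The threshold gate at the end of the lifecycle can be expressed as a small piece of session logic. The sketch below is illustrative only: the threshold of 100 is taken from the flowchart's example, and the type and function names are assumptions.

```typescript
// Sketch of the threshold gate at the end of the lifecycle.
interface Session {
  effortScore: number;   // cumulative, never decreases
  exportAllowed: boolean;
}

const EFFORT_THRESHOLD = 100; // example value from the flowchart

function recordEffort(session: Session, effortRating: number): Session {
  // Effort only accumulates: negative adjustments are clamped to zero.
  const effortScore = session.effortScore + Math.max(0, effortRating);
  return {
    effortScore,
    // Once the threshold is reached, final export (with the auth summary)
    // is unlocked and stays unlocked; otherwise the agent keeps logging effort.
    exportAllowed: session.exportAllowed || effortScore >= EFFORT_THRESHOLD,
  };
}

// Example: a session crossing the threshold
let session: Session = { effortScore: 92, exportAllowed: false };
session = recordEffort(session, 15);
console.log(session); // { effortScore: 107, exportAllowed: true }
```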
6. Core Components & Architecture
1. Frontend (Student Interface)
- Chat-like UI
- Visual tracker of Effort Score
- Request final report
2. Backend Agent Server
- Node.js / ASP.NET Core API
- Prompt logging
- Effort calculation engine
- Learning summary generator
3. Effort Calculation Engine (Core IP)
- Prompt quality heuristic (see the scoring sketch below)
- Interaction richness (follow-ups, clarifications)
- Meta-learning tracking (e.g., summary requests, code iterations)
- No penalty, only reward (effort only grows)
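The sketch below illustrates how such a heuristic rating could be computed per interaction. The specific signals and weights (word count, follow-up detection, a keyword check for reflective prompts) are placeholders standing in for the real NLP heuristics, not the proposed scoring model.

```typescript
// Illustrative effort heuristic: the weights and signals below are placeholders.
interface Turn {
  prompt: string;
  previousPrompts: string[];
}

function ratePromptEffort(turn: Turn): number {
  let score = 0;

  // 1. Prompt quality: longer, more specific prompts earn more than one-liners.
  const words = turn.prompt.trim().split(/\s+/).length;
  score += Math.min(10, Math.floor(words / 5));

  // 2. Interaction richness: follow-ups that build on earlier exchanges count extra.
  if (turn.previousPrompts.length > 0) score += 3;

  // 3. Meta-learning signals: asking to explain, summarise, or compare suggests reflection.
  if (/\b(explain|why|summari[sz]e|compare|what if)\b/i.test(turn.prompt)) score += 5;

  // No penalties: the engine only rewards, so the cumulative score can only grow.
  return score;
}
```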
4. Database (PostgreSQL / MongoDB)
- User sessions
- Prompt–response pairs
- Timestamped effort logs
- Exportable JSON / PDF reports (see the data model sketch below)
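One possible shape of these records is sketched below as TypeScript interfaces. The field names are illustrative assumptions, not a fixed schema; the same structure could map onto PostgreSQL tables or MongoDB collections.

```typescript
// Sketch of the stored records; field names are illustrative.
interface UserSession {
  sessionId: string;
  studentId: string;
  startedAt: string;       // ISO timestamp
}

interface PromptResponsePair {
  sessionId: string;
  prompt: string;
  response: string;
  timestamp: string;
}

interface EffortLogEntry {
  sessionId: string;
  effortRating: number;    // rating for a single interaction
  cumulativeScore: number; // running total, monotonically increasing
  timestamp: string;
}

// A final report could be exported as JSON built from these records
// (PDF generation would sit on top of the same data).
interface EffortReport {
  session: UserSession;
  interactions: PromptResponsePair[];
  effortLog: EffortLogEntry[];
  learningSummary: string;
}
```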
5. Educator Dashboard
- Visual trace of AI usage per student
- Report download
- Optional feedback interface
7. Benefits to WIUT
✅ Reduces the stress of policing academic dishonesty
✅ Encourages constructive use of AI
✅ Provides evidence-based evaluation
✅ Scales to UG, PG, and PhD levels
✅ Reinforces WIUT's commitment to innovation with integrity
8. Future Enhancements
- Plagiarism + Effort Correlation Reports
- Integration with SRS
- Peer review option
- Feedback sentiment analysis
- AI tutor suggestions based on student effort areas
9. Call to Action
We propose to:
- Build a working prototype in 2–3 months
- Pilot with select students and academic supervisors
- Present dashboard and interaction reports to validate feasibility
10. Appendix: UI Mockups (To Be Designed)
- Student chat screen
- Live effort tracker
- Educator dashboard
- Final summary PDF