# Ethics Application Outline: AI Workspace Study

## 1. Study Overview

### 1.1 Purpose
Evaluate the effectiveness and user experience of AI-assisted programming tools (AI Workspace) in educational contexts, specifically testing integration of voice-based AI agents (OpenAI Realtime) and terminal-based coding agents (Claude Code) for programming tasks.

### 1.2 Research Questions
- How do students interact with multi-agent AI programming assistance?
- What are the benefits and challenges of voice vs. text-based AI coding support?
- How does AI assistance affect learning outcomes and programming confidence?

### 1.3 Participants
- Target group: Programming students (undergraduate/graduate level)
- Expected sample size: [TBD]
- Recruitment: [Voluntary participation through course announcements]

## 2. Psychological Risks and Ethical Considerations

### 2.1 Identified Risks

#### 2.1.1 Frustration and Overwhelm
**Risk**: Students may experience frustration when:
- AI agents provide incorrect or unhelpful suggestions
- Technical issues disrupt workflow (connection failures, tool errors)
- Multiple agents provide conflicting advice
- Voice interaction fails to understand intent accurately

**Severity**: Low to Moderate

#### 2.1.2 Comparison Anxiety
**Risk**: Students may compare their performance unfavorably to:
- AI-generated code quality
- Perceived speed/efficiency of AI problem-solving
- Other students' AI-assisted outcomes

**Severity**: Moderate

#### 2.1.3 Career and Study Path Doubts
**Risk**: Exposure to advanced AI coding capabilities may trigger:
- Questioning career viability ("Will AI replace programmers?")
- Self-doubt about programming aptitude ("Am I good enough?")
- Uncertainty about investment in programming education
- Imposter syndrome intensification

**Severity**: Moderate to High

#### 2.1.4 Over-reliance and Skill Atrophy Concerns
**Risk**: Students may:
- Worry about becoming dependent on AI assistance
- Feel guilty about using AI tools ("Am I cheating?")
- Question authenticity of their learning achievements

**Severity**: Low to Moderate

#### 2.1.5 Data Privacy Concerns
**Risk**: Anxiety about:
- Code and conversation data collection
- Academic integrity implications
- Potential surveillance/monitoring

**Severity**: Low (with proper mitigation)

### 2.2 Vulnerable Populations
- Students with pre-existing anxiety or imposter syndrome
- First-generation college students
- Students from underrepresented groups in computing
- Students with learning disabilities or neurodivergence

## 3. Mitigation Strategies

### 3.1 Pre-Study Interventions

#### 3.1.1 Educational Framing
- Frame AI as a **tool**, not a replacement for programmers
- Emphasize skill development: learning to work *with* AI is a valuable professional skill
- Provide context: current limitations and future trajectory of AI in software development
- Normalize challenges: AI tools are experimental and imperfect

#### 3.1.2 Setting Realistic Expectations
- Clear communication that:
  - the study evaluates the **tool**, not the student
  - technical issues are expected (this is research software)
  - frustration is normal and constitutes valuable feedback
  - participation has no impact on grades or academic standing

#### 3.1.3 Peer Discussion Session
- Pre-study group discussion about:
  - AI in programming: opportunities and limitations
  - Professional perspectives on AI-assisted development
  - Addressing common concerns and misconceptions

### 3.2 During Study Interventions

#### 3.2.1 Psychological Support Access
- Designated research contact person for concerns
- Clear escalation path to university counseling services
- Regular check-ins during study period
- Optional peer support groups for participants

#### 3.2.2 Right to Withdraw
- Explicit, repeated reminders that:
  - Participation is voluntary
  - Withdrawal is possible at any time without penalty
  - No explanation required for withdrawal
  - Data will be deleted upon withdrawal (if requested)

#### 3.2.3 Frustration Management
- Provide clear technical support channels
- "Escape hatch" options: ability to complete tasks without AI if desired
- Time limits on tasks to prevent excessive struggle
- Explicit permission to report bugs/issues without feeling like "failure"

#### 3.2.4 Ongoing Communication
- Regular reminders about study purpose and scope
- Transparent sharing of common issues/bugs
- Acknowledgment of participant feedback in study updates

### 3.3 Post-Study Interventions

#### 3.3.1 Debrief Session
- Group discussion covering:
  - Study findings and participant contributions
  - Contextualization of AI capabilities and limitations
  - Future of AI in programming: realistic outlook
  - Career guidance: skills that remain uniquely human

#### 3.3.2 Individual Follow-Up
- Optional one-on-one sessions for participants who:
  - Experienced significant distress
  - Raised concerns during study
  - Request additional support

#### 3.3.3 Resource Provision
- Information about:
  - Career counseling services
  - Programming mentorship programs
  - AI literacy resources
  - Professional development opportunities

## 4. Informed Consent

### 4.1 Consent Process
- Written consent form provided **before** study begins
- Verbal explanation of study by researcher
- Opportunity to ask questions
- 24-hour consideration period before participation begins
- Re-consent checkpoints for longer studies

### 4.2 Consent Content
Must include clear information about:

#### 4.2.1 Study Purpose and Procedures
- What participants will do
- Expected time commitment
- Technical requirements

#### 4.2.2 Risks and Benefits
- Explicit enumeration of psychological risks (see Section 2.1)
- Potential benefits: skill development, exposure to new tools
- Acknowledgment that benefits may not materialize for all participants

#### 4.2.3 Data Collection and Use
- What data is collected (conversations, code, interactions, surveys)
- How data is stored and protected
- Who has access to data
- Data retention period
- Anonymization procedures
- Publication/sharing plans

#### 4.2.4 Rights and Protections
- Right to withdraw at any time
- Right to request data deletion
- Confidentiality protections
- No academic penalties for withdrawal
- Contact information for questions/concerns
- Ethics committee contact for complaints

#### 4.2.5 Special Considerations
- Disclosure of experimental nature of software
- Technical limitations and potential bugs
- No guarantee of functionality
- Independence from course grades

### 4.3 Capacity and Voluntariness
- Ensure participants are not coerced (no course requirement)
- No undue inducement (reasonable compensation if offered)
- Special attention to power dynamics (instructors should not recruit)

## 5. Data Handling and Privacy

### 5.1 Data Collection

#### 5.1.1 Types of Data
- **Conversation logs**: All interactions with AI agents (text and voice transcripts)
- **Code artifacts**: Written code, version history, file operations
- **Tool usage metrics**: Commands used, session duration, error rates
- **Survey responses**: Pre/post-study questionnaires, experience ratings
- **Observational notes**: Optional session observations (with explicit consent)

#### 5.1.2 Data Minimization
- Collect only data necessary for research questions
- No collection of:
  - Identifiable personal information in code comments
  - Unrelated browser activity
  - External communications
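For illustration only, a minimal redaction pass of the kind implied above might look like the following sketch; the patterns and the student-ID format are hypothetical assumptions, not part of the protocol:

```python
import re

# Hypothetical patterns for obviously identifying strings; a real study
# would tune these to its own data and review the output manually.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
STUDENT_ID_RE = re.compile(r"\b[sS]\d{7}\b")  # assumed local ID format

def redact(text: str) -> str:
    """Replace likely identifiers in a log line with placeholders."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = STUDENT_ID_RE.sub("[STUDENT_ID]", text)
    return text

print(redact("Contact alice@uni.example with ID s1234567"))
# → Contact [EMAIL] with ID [STUDENT_ID]
```

Automated redaction of this kind complements, but does not replace, the manual code review mentioned in Section 5.2.2.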

### 5.2 Data Storage and Security

#### 5.2.1 Technical Measures
- **Encryption**: All data encrypted at rest and in transit
- **Access controls**: Role-based access, multi-factor authentication
- **Secure infrastructure**: University-approved servers or equivalent
- **Database isolation**: Separate databases for different participant groups
- **Audit logging**: Track all data access
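As a sketch of the audit-logging idea (an assumed implementation, not the actual infrastructure), each data access could append one structured record to an append-only file:

```python
import datetime
import json

def log_access(logfile: str, user: str, dataset: str, action: str) -> None:
    """Append one JSON audit record per data access to an append-only log."""
    record = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "dataset": dataset,
        "action": action,
    }
    with open(logfile, "a") as f:
        f.write(json.dumps(record) + "\n")
```

One line per access keeps the log greppable and easy to review during the weekly risk-assessment meetings.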

#### 5.2.2 Anonymization
- Participant IDs replace names/identifiers in datasets
- Key linking IDs to identities stored separately, encrypted
- Code review to remove identifiable information before analysis
- Aggregation for publication (no individual-level reporting unless necessary)
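A minimal pseudonymization sketch, assuming random participant IDs and a separately stored linking key (the function name and ID format are illustrative):

```python
import secrets

def pseudonymize(names: list[str]) -> dict[str, str]:
    """Map each participant name to a random study ID.

    The returned mapping is the linking key: it would be stored
    separately and encrypted, while only the IDs appear in
    analysis datasets.
    """
    return {name: "P" + secrets.token_hex(4) for name in names}

linking_key = pseudonymize(["Alice", "Bob"])
dataset_ids = sorted(linking_key.values())  # names never enter the dataset
```

Random IDs (rather than, say, hashes of names) avoid re-identification by anyone who can guess the input.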

#### 5.2.3 Data Retention
- **Active study period**: Full data retention
- **Post-study**: [X years] retention for verification/reanalysis
- **After retention period**: Secure deletion of identifiable data
- **Anonymized data**: May be retained indefinitely for meta-analysis
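The retention policy could be enforced by a scheduled purge job along these lines (a sketch; the directory layout and retention period are placeholders, and real secure deletion needs more than `os.remove`):

```python
import os
import time

RETENTION_DAYS = 3 * 365  # placeholder; the actual period is set in the protocol

def purge_expired(directory: str, retention_days: int = RETENTION_DAYS) -> list[str]:
    """Delete files older than the retention cutoff; return their names."""
    cutoff = time.time() - retention_days * 86400
    removed = []
    for name in sorted(os.listdir(directory)):
        path = os.path.join(directory, name)
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            os.remove(path)
            removed.append(name)
    return removed
```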

### 5.3 Data Sharing

#### 5.3.1 Internal Access
- Principal investigator: Full access
- Research assistants: Anonymized data only, under confidentiality agreement
- Technical support: Minimum necessary access, logged

#### 5.3.2 External Sharing
- **Publication**: Only aggregated, anonymized data
- **Research repositories**: Anonymized datasets only, with participant consent
- **Third parties**: No sharing without explicit participant consent
- **Legal requirements**: Disclosure only as legally mandated (with participant notification if permitted)

### 5.4 Third-Party Services

#### 5.4.1 OpenAI API
- **Data transmission**: Conversations sent to OpenAI for processing
- **OpenAI policies**: Data not used for model training (per OpenAI enterprise agreement)
- **Disclosure**: Participants informed about third-party processing
- **Alternatives**: Option to participate using only the terminal-based coding agent (Claude Code), which does not send data to the OpenAI API

#### 5.4.2 Lively4 Server
- **Local hosting**: Lively4 server runs locally, not in an external cloud
- **Data control**: Full control over data; no external transmission except to the AI services disclosed in this section

### 5.5 Breach Protocol
- **Detection**: Automated monitoring for unauthorized access
- **Response**: Immediate notification to affected participants
- **Mitigation**: Incident response plan, security hardening
- **Reporting**: Ethics committee notification within 24 hours

## 6. Equity and Inclusion

### 6.1 Accessibility Considerations
- Ensure AI Workspace is accessible to students with:
  - Visual impairments (screen reader compatibility)
  - Hearing impairments (text alternatives to voice interaction)
  - Motor impairments (keyboard navigation)
- Provide accommodations as needed

### 6.2 Digital Divide
- Equipment provision for students without adequate computers
- Technical support for setup and troubleshooting
- No assumption of prior AI tool experience

### 6.3 Inclusive Recruitment
- Active outreach to underrepresented groups
- Materials in multiple languages if applicable
- Sensitivity to cultural differences in AI perception

## 7. Monitoring and Oversight

### 7.1 Ongoing Risk Assessment
- Weekly research team meetings to review participant feedback
- Incident reporting system for adverse events
- Threshold triggers for study pause/modification:
  - Multiple reports of significant distress
  - Systematic technical failures affecting experience
  - Unexpected risks emerging

### 7.2 Ethics Committee Reporting
- Adverse event reporting within 24-48 hours
- Quarterly progress reports to ethics committee
- Protocol amendment requests for significant changes

### 7.3 Participant Feedback Loop
- Anonymous feedback mechanism during study
- Regular surveys on experience and wellbeing
- Exit interviews to capture final reflections

## 8. Expected Benefits

### 8.1 Individual Benefits
- Exposure to cutting-edge AI programming tools
- Development of AI collaboration skills (career-relevant)
- Early access to productivity-enhancing technology
- Contribution to meaningful research

### 8.2 Societal Benefits
- Inform design of educational AI tools
- Evidence base for AI integration in CS education
- Understanding of human-AI collaboration in programming
- Guidance for responsible AI deployment in learning contexts

### 8.3 Benefit-Risk Balance
While psychological risks are present, they are:
- Mostly low to moderate severity
- Extensively mitigated through study design
- Balanced by meaningful individual and societal benefits
- Comparable to risks in other educational research

## 9. Researcher Qualifications and Training

### 9.1 Research Team
- Principal investigator: [Qualifications in HCI/CS education]
- Co-investigators: [Relevant expertise]
- Ethics training: All team members complete human subjects research training

### 9.2 Support Network
- Access to psychology/counseling consultants
- Technical support team for AI Workspace issues
- Ethics committee guidance available

## 10. Conclusion

This study presents moderate psychological risks, primarily related to frustration, self-doubt, and career anxiety when students interact with advanced AI programming agents. These risks are substantially mitigated through:

1. **Comprehensive informed consent** with explicit risk disclosure
2. **Proactive educational framing** about AI's role and limitations
3. **Robust psychological support** infrastructure and monitoring
4. **Strong data protection** measures ensuring privacy and security
5. **Right to withdraw** at any time without consequence

The expected benefits—both individual skill development and societal understanding of AI in education—justify the carefully managed risks. The study design prioritizes participant wellbeing while generating valuable insights into human-AI collaboration in programming education.

---

**Appendices** (to be attached):
- A. Informed Consent Form (draft)
- B. Recruitment Materials
- C. Pre-Study Information Sheet
- D. Survey Instruments
- E. Debrief Script
- F. Technical Architecture and Data Flow Diagram
- G. Data Management Plan
- H. Incident Response Protocol
