Safety & Liability Framework

Last Updated: January 12, 2025

1. Our Commitment to Safety

At Helios Frontier LLC, we recognize that deploying artificial intelligence in educational settings carries significant responsibility. This Safety & Liability Framework outlines our comprehensive approach to protecting students, accepting accountability, and maintaining transparency.

Core Principle: We accept full legal and financial responsibility for harm caused by AI-generated content on the Project Nova™ platform, subject to the limitations outlined in our Terms of Service.

2. Multi-Layer Safety System

2.1 Content Filtering (Layer 1): Automated systems that block inappropriate content before it reaches students (an illustrative sketch follows this list):

  • Blocked Topics: Politics, religion, violence, sexual content, hate speech, self-harm, illegal activities
  • Real-time Scanning: Every AI response is analyzed before delivery to students
  • Keyword Blacklists: Continuously updated database of inappropriate terms
  • Context Analysis: Machine learning models detect subtle inappropriate content
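
The exact term lists and models behind these filters are internal to the platform. Purely as an illustration, the Python sketch below shows the general shape of a two-stage check (keyword blacklist, then a context classifier); the names BLOCKED_TERMS, context_model, and the threshold are hypothetical placeholders, not the production implementation.

    # Illustrative sketch only; not the production filter.
    BLOCKED_TERMS = {"example-blocked-term"}  # stands in for the maintained keyword blacklist
    CONTEXT_THRESHOLD = 0.8                   # assumed cutoff for the context classifier

    def is_safe(response_text: str, context_model) -> bool:
        """Return True only if an AI response passes both filter stages."""
        lowered = response_text.lower()

        # Stage 1: keyword blacklist
        if any(term in lowered for term in BLOCKED_TERMS):
            return False

        # Stage 2: machine-learning context analysis for subtler issues
        risk_score = context_model.score(response_text)  # hypothetical classifier API
        return risk_score < CONTEXT_THRESHOLD

    # A response reaches the student only when is_safe(...) returns True; otherwise it is
    # withheld and a teacher alert (Section 2.2) is raised.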

2.2 Teacher Monitoring (Layer 2): Human oversight and intervention capabilities (a hypothetical example of the emergency stop appears after this list):

  • Real-Time Dashboard: Teachers see all student conversations as they happen
  • Emergency Stop Button: Immediately halt any conversation with one click
  • Student Pause: Temporarily disable AI access for individual students
  • Transcript Export: Download complete conversation history for review
  • Alert System: Automatic notifications when content filters are triggered
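
The emergency stop and pause controls are delivered through the teacher dashboard itself. Purely to illustrate what an intervention does behind the button, the Python sketch below shows a hypothetical stop request; the endpoint, payload, and function are assumptions, not the platform's documented API.

    import requests  # third-party HTTP client, used here only for illustration

    def emergency_stop(session_id: str, teacher_token: str) -> None:
        """Hypothetical sketch: immediately halt a student's AI conversation."""
        response = requests.post(
            "https://projectnova.fun/api/sessions/stop",  # illustrative URL, not a documented endpoint
            json={"session_id": session_id, "reason": "teacher_emergency_stop"},
            headers={"Authorization": f"Bearer {teacher_token}"},
            timeout=5,
        )
        # On success the conversation halts and the intervention is written to the audit log (Section 2.3).
        response.raise_for_status()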

2.3 Audit Logging (Layer 3): Comprehensive record-keeping for accountability (an illustrative record structure follows this list):

  • Complete Transcripts: Every AI conversation is permanently logged
  • Filter Matches: Record of all content that triggered safety filters
  • Teacher Actions: Log of all interventions and emergency stops
  • System Events: Technical logs of AI model behavior and responses
  • Retention Period: Safety logs retained for 7 years
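
Concretely, each item above can be thought of as a structured log record. The sketch below is an assumed shape for such an entry, including a check against the 7-year retention period; the field names are illustrative, not the platform's actual schema.

    from dataclasses import dataclass, field
    from datetime import datetime, timedelta, timezone

    RETENTION = timedelta(days=7 * 365)  # safety logs are retained for 7 years

    @dataclass
    class SafetyLogEntry:
        """Illustrative audit record; field names are assumptions, not the real schema."""
        timestamp: datetime
        conversation_id: str
        event_type: str            # e.g. "transcript", "filter_match", "teacher_action", "system_event"
        detail: dict = field(default_factory=dict)

        def past_retention(self, now: datetime | None = None) -> bool:
            """True once the record is older than the 7-year retention period."""
            now = now or datetime.now(timezone.utc)
            return now - self.timestamp > RETENTION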

2.4 Continuous Improvement (Layer 4): Ongoing safety enhancements:

  • Weekly Reviews: Safety team analyzes flagged conversations
  • Filter Updates: Content filters improved based on real-world usage
  • AI Model Updates: Regular updates to underlying AI systems
  • External Audits: Third-party safety assessments annually

3. Financial Responsibility Framework

Liability Acceptance

Helios Frontier LLC accepts legal and financial responsibility for harm caused to students by AI-generated content on the Project Nova™ platform, subject to the liability limits described in Section 3.2 below.

3.1 Covered Harms: We accept liability for:

  • Psychological Harm: Emotional distress caused by inappropriate AI responses
  • Educational Harm: Provision of factually incorrect or misleading information
  • Privacy Violations: Unauthorized disclosure of student information
  • Discriminatory Content: Biased or discriminatory AI responses
  • Safety Failures: Failure of safety systems to prevent harmful content

3.2 Liability Limits: As outlined in Section 9 of our Terms of Service (a worked example follows this list):

  • Per-Incident Cap: The greater of $10,000 or the total amount paid by the user in the past 12 months
  • Aggregate Annual Cap: $100,000 per calendar year across all incidents
  • Exclusions: Indirect, consequential, or punitive damages (except where prohibited by law)
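
As a worked illustration of how the two caps combine, using only the figures stated above, the Python sketch below computes the maximum payout on a single claim; it is illustrative arithmetic, not a statement of how claims are adjudicated.

    PER_INCIDENT_FLOOR = 10_000.0     # per-incident cap is the greater of this figure
                                      # and the amount the user paid in the past 12 months
    AGGREGATE_ANNUAL_CAP = 100_000.0  # total payouts per calendar year across all incidents

    def max_payout(requested: float, paid_last_12_months: float, paid_out_this_year: float) -> float:
        """Illustrative only: apply the per-incident cap, then the aggregate annual cap."""
        per_incident_cap = max(PER_INCIDENT_FLOOR, paid_last_12_months)
        remaining_annual = max(0.0, AGGREGATE_ANNUAL_CAP - paid_out_this_year)
        return min(requested, per_incident_cap, remaining_annual)

    # Example: a claimant who paid $4,000 over the past year and requests $15,000,
    # when $95,000 has already been paid out this calendar year, could receive at most $5,000.
    print(max_payout(15_000, 4_000, 95_000))  # -> 5000.0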

3.3 Claims Process: To file a liability claim:

  1. Report incident to [email protected] within 30 days
  2. Provide detailed description of harm and supporting evidence
  3. Include conversation transcripts (if available)
  4. Describe impact on student (medical records, counseling reports, etc.)
  5. State compensation requested and justification

3.4 Response Timeline:

  • Acknowledgment: Within 48 hours of claim submission
  • Investigation: Completed within 14 business days
  • Resolution: Decision communicated within 30 days
  • Payment: If approved, compensation paid within 15 business days

4. Public Incident Disclosure

4.1 Transparency Commitment: We commit to full transparency when AI-generated content causes harm to students. All safety incidents are publicly disclosed on our Safety Incidents page.

4.2 Disclosure Timeline:

  • Within 72 Hours: Initial incident report published
  • Within 7 Days: Detailed investigation findings released
  • Within 30 Days: Corrective actions and system improvements documented
  • Ongoing: Updates provided as investigation progresses

4.3 Incident Report Contents: Each public disclosure includes the following (an illustrative record structure follows this list):

  • Incident Summary: What happened and when (anonymized)
  • Root Cause Analysis: Why the safety systems failed
  • Student Impact: Nature and severity of harm (privacy-protected)
  • System Response: How safety systems performed
  • Corrective Actions: Immediate steps taken to prevent recurrence
  • Long-Term Improvements: Planned system enhancements
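
For illustration, the disclosure fields above map onto a simple structured record like the one sketched below; this is an assumed shape for drafting a report, not an official schema.

    from dataclasses import dataclass

    @dataclass
    class IncidentReport:
        """Illustrative structure mirroring Section 4.3; field names are assumptions."""
        incident_summary: str              # what happened and when (anonymized)
        root_cause_analysis: str           # why the safety systems failed
        student_impact: str                # nature and severity of harm (privacy-protected)
        system_response: str               # how safety systems performed
        corrective_actions: list[str]      # immediate steps taken to prevent recurrence
        long_term_improvements: list[str]  # planned system enhancements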

4.4 Privacy Protection: All incident reports are anonymized to protect student privacy. No personally identifiable information is disclosed without explicit parental consent.

5. Safety Metrics and Reporting

5.1 Public Safety Dashboard: We publish quarterly safety reports including:

  • Total Conversations: Number of AI interactions across platform
  • Content Filter Triggers: How many responses were blocked by safety systems
  • Teacher Interventions: Number of emergency stops and student pauses
  • Reported Incidents: Safety concerns reported by teachers and parents
  • Confirmed Harms: Verified cases where AI content caused harm
  • System Improvements: Safety enhancements implemented each quarter

5.2 Current Safety Statistics (Updated Quarterly):

  • Total AI Conversations: 0 (Pre-Launch)
  • Content Filter Blocks: 0
  • Teacher Interventions: 0
  • Reported Safety Concerns: 0
  • Confirmed Harm Incidents: 0

5.3 Independent Verification: All safety metrics are audited annually by third-party safety experts to ensure accuracy and completeness.

6. Teacher Rights and Responsibilities

6.1 Unrestricted Monitoring Rights: Teachers have absolute rights to:

  • View all student AI conversations in real-time
  • Access complete conversation history at any time
  • Export transcripts for review or record-keeping
  • Monitor multiple students simultaneously
  • Review flagged content and safety alerts

6.2 Intervention Powers: Teachers can immediately:

  • Stop any AI conversation with emergency stop button
  • Pause student access to AI features
  • Adjust content filtering settings (whitelist/blacklist topics)
  • Report safety concerns directly to our safety team
  • Request immediate investigation of concerning content

6.3 Teacher Responsibilities: Teachers agree to:

  • Actively monitor student AI interactions during class time
  • Intervene immediately if inappropriate content appears
  • Report safety incidents to parents and administrators
  • Complete safety training before enabling AI features
  • Review safety guidelines and best practices regularly

6.4 No Jurisdictional Limits: These teacher rights apply universally and cannot be restricted by any terms or conditions; they are preserved to the maximum extent permitted by applicable law. Teacher safety and monitoring capabilities are non-negotiable.

7. Parent Rights and Reporting

7.1 Access Rights: Parents have the right to:

  • View all AI conversations involving their child
  • Export complete conversation transcripts
  • Review safety incidents and teacher interventions
  • Access learning progress and achievement data
  • Request deletion of their child's data at any time

7.2 Control Rights: Parents can:

  • Disable AI features for their child
  • Set additional content restrictions
  • Limit daily usage time
  • Receive email alerts for flagged content
  • Withdraw consent and delete account

7.3 Reporting Mechanism: Parents can report safety concerns:

  • Email: [email protected] (monitored 24/7)
  • Safety Incidents Page: projectnova.fun/safety
  • Parent Dashboard: Report button in conversation viewer
  • Phone: Emergency hotline (coming soon)

8. Legal and Regulatory Compliance

8.1 Applicable Laws: We comply with:

  • COPPA: Children's Online Privacy Protection Act (United States)
  • FERPA: Family Educational Rights and Privacy Act (United States)
  • GDPR: General Data Protection Regulation (European Union)
  • GDPR-K: Children's privacy provisions under GDPR
  • PIPEDA: Personal Information Protection and Electronic Documents Act (Canada)
  • UK GDPR: UK data protection laws
  • Age Appropriate Design Code: UK children's privacy code

8.2 Governing Law: This Safety & Liability Framework is governed by the laws of the State of Delaware, United States, as outlined in our Terms of Service.

8.3 Dispute Resolution: Safety-related disputes are resolved through binding arbitration, except where prohibited by law or where immediate injunctive relief is required to protect student safety.

9. Contact and Emergency Response

Safety Contact Information

General Safety Concerns:

Email: [email protected]

Response Time: Within 24 hours

Urgent Safety Issues:

Email: [email protected]

Response Time: Within 2 hours

Liability Claims:

Email: [email protected]

Response Time: Within 48 hours

Legal Inquiries:

Email: [email protected]

Helios Frontier LLC

Website: projectnova.fun

Safety Dashboard: projectnova.fun/safety

BY USING THE SERVICE, YOU ACKNOWLEDGE THAT YOU HAVE READ AND UNDERSTOOD THIS SAFETY & LIABILITY FRAMEWORK.

This framework is part of our Terms of Service and Privacy Policy. In case of conflicts between documents, the most protective provision for students shall apply.