Braintrust AIR: Ensuring Compliance with U.S. and EU AI Regulations
This document outlines the compliance measures Braintrust AIR follows to align with U.S. and EU AI regulations. It highlights how we ensure transparency, fairness, and data protection while strictly avoiding bias, deceptive AI marketing, unauthorized AI-generated content, and discriminatory hiring practices. Braintrust AIR is designed to assist in the hiring process without violating any regulatory requirements.
Regulatory Compliance
Braintrust AIR adheres to U.S. and EU regulations by implementing human oversight, explainability, and privacy safeguards to mitigate risks and ensure ethical AI use.
How We Handle Bias
Braintrust AIR does not make final hiring decisions or determine which candidates move forward in the process; those decisions are made entirely by humans. Our AI serves as a support tool, offering detailed insights and interview summaries while ensuring all hiring choices remain in human hands. To promote fairness, Braintrust AIR delivers clear scorecard insights and full interview recordings that empower hiring teams in their decision-making.
How We Handle Data
Braintrust AIR prioritizes data security and privacy by strictly limiting the information we collect and store.
- Minimal PII, no facial recognition: We do not use facial recognition, and we do not ask for personally identifiable information beyond a candidate’s name and email, which are stored securely on our servers.
- SOC 2 (in progress): Braintrust AIR is in the process of obtaining SOC 2 certification, which attests to industry standards for data confidentiality, integrity, and availability.
- GDPR and CCPA Compliance: Our system adheres to the data protection requirements of the GDPR and the CCPA.
- Full auditability: All AI-driven insights, including interview recordings, scoring, and summaries, are documented and auditable to maintain regulatory compliance.

Braintrust AIR remains committed to ethical AI use, ensuring compliance with all relevant regulations while supporting fair, transparent, and human-driven hiring decisions.
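As a minimal sketch of what an auditable interview record could look like, consider the following; the class, field names, and values are illustrative assumptions, not Braintrust AIR's actual data model:

```python
# Illustrative sketch only: field names and storage details are assumptions,
# not Braintrust AIR's actual schema.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class InterviewAuditRecord:
    candidate_id: str
    recording_uri: str  # pointer to the full interview recording
    scorecard: dict     # AI-generated scoring, reviewed by human recruiters
    summary: str        # AI-generated interview summary
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = InterviewAuditRecord(
    candidate_id="cand-123",
    recording_uri="s3://interviews/cand-123.mp4",
    scorecard={"communication": 4, "technical_depth": 5},
    summary="Strong background in distributed systems.",
)
# Every AI-driven insight stays in one serializable, timestamped record.
print(asdict(record)["scorecard"]["communication"])  # prints 4
```

Keeping each insight in a single serializable, timestamped record is one way to make recordings, scores, and summaries available to auditors on request.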
Braintrust AIR Compliance with U.S. Federal AI Laws
While there is currently no comprehensive federal AI law in the U.S. (the Biden administration's AI executive order has since been rescinded), Braintrust AIR remains fully compliant with existing regulations. We adhere to FTC guidelines and to state-level AI transparency and anti-bias laws, ensuring ethical, transparent, and fair AI-driven hiring practices.
FTC Guidelines on AI Marketing & Fairness
Compliance: The Federal Trade Commission (FTC) requires that AI-powered recruitment marketing and hiring claims be substantiated by data and not misleading.
How Braintrust AIR Ensures Compliance:
- Clearly states how AI is used in the hiring process.
- Does not overpromise AI capabilities without supporting evidence.
- Emphasizes in its marketing that human recruiters make all final decisions.
- Regularly reviews AI-driven hiring metrics to ensure accuracy.
Braintrust AIR Compliance with US State AI Laws
Braintrust AIR ensures ethical, transparent, and legally compliant AI-driven hiring by adhering to state laws on bias prevention, data privacy, and AI transparency. Our AI does not make hiring decisions—human recruiters remain in control.
We adhere to data protection and privacy standards, ensuring data security and privacy while maintaining clear candidate and client disclosures. Regular audits, risk management, and bias testing keep our AI fair and compliant.
Braintrust AIR does not use facial recognition or unauthorized AI-generated content and follows best practices to align with evolving state and federal AI regulations. The following sections outline our compliance with key state laws.
1. Illinois AI Hiring Regulations (Anti-Bias Laws)
Compliance: Illinois requires AI-based hiring tools that make hiring decisions to be audited for bias to prevent discrimination.
How Braintrust AIR Avoids Violations:
- Braintrust AIR does not make hiring decisions—AI is used solely for recommendations.
- Maintains a human-in-the-loop approach, ensuring recruiters review AI suggestions to prevent discriminatory outcomes.
2. California Consumer Privacy Act (CCPA) & AI Transparency
Compliance: CCPA requires AI-driven platforms to disclose data collection practices and provide users with control over their personal data.
How Braintrust AIR Avoids Violations:
- Discloses how AI processes candidates' data and their rights in our Terms of Service and Privacy Policy.
- Provides clients with the flexibility to offer alternative interview methods for candidates.
- Candidates who complete the interview have the option to request data deletion.
- Secures personal and hiring data in compliance with state and federal privacy laws.
3. Tennessee’s "ELVIS Act" & AI-Generated Content
Compliance: The ELVIS Act prohibits unauthorized AI-generated likenesses or voices.
How Braintrust AIR Avoids Violations:
- Does not use AI-generated candidate profiles or manipulate real candidates' likenesses.
- Ensures AI-generated recommendations are based on real candidate skills and experience.
4. Utah AI Disclosure Law
Compliance: Utah requires businesses to disclose when AI is being used in consumer interactions.
How Braintrust AIR Avoids Violations:
- Clearly informs candidates that AI is being used in the hiring process and requires them to opt in before starting the interview.
- AI does not make any hiring decisions.
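The opt-in requirement above can be sketched as a simple gate; the `start_interview` function and its messages are hypothetical illustrations, not Braintrust AIR's actual API:

```python
# Hypothetical sketch of an AI-disclosure opt-in gate; not Braintrust AIR's
# actual implementation.
def start_interview(candidate_opted_in: bool) -> str:
    """Refuse to start an AI-assisted interview without explicit opt-in."""
    if not candidate_opted_in:
        # The disclosure is shown first; the interview never starts
        # without the candidate's consent.
        return "blocked: candidate must opt in to AI-assisted interviewing"
    return "interview started (AI assists; humans make all hiring decisions)"

print(start_interview(False))
print(start_interview(True))
```

Gating the interview on an explicit consent flag makes the disclosure requirement enforceable in code rather than relying on process alone.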
5. Executive Order 14110 (Since Rescinded) – Ethical AI Development
Compliance: While repealed, this order influenced AI safety and security principles that many companies still follow.
How Braintrust AIR Aligns:
- Ensures AI is used to enhance fairness rather than reinforce bias.
- Complies with evolving industry standards and best practices.
6. New York City AI Hiring Law (Bias Audits & Transparency)
Compliance: New York City mandates that employers using AI tools that make or substantially assist hiring decisions conduct annual bias audits, publicly disclose the results, notify candidates of AI use in advance, and offer alternative selection methods.
Local Law 144 Notice: Employers must notify candidates before using an AEDT (Automated Employment Decision Tool), detailing its use, evaluation criteria, data sources (if not online), and alternative process options. Notices may be provided via a website, job posting, email, or mail.
How Braintrust AIR Avoids Violations:
- Candidate Notification: Notifies NYC-based job seekers of AI use in the hiring process.
- Alternative Selection Process: Allows companies to provide manual review or accommodations for their candidates.
- Compliance Monitoring: Ensures AI does not make hiring decisions or recommendations, nor does it substantially assist or replace discretionary decision-making.
- Braintrust AIR as an AEDT: Braintrust AIR qualifies as an AEDT as it utilizes AI-driven processes to analyze candidate data and provide recommendations. However, it does not make final hiring decisions, ensuring compliance while supporting employers in their selection process. Candidates are informed of AIR’s use based on each client's preference—via website, email, or job posting.
7. Maryland AI Hiring Law (Facial Recognition Restrictions)
Compliance: Maryland prohibits employers from using facial recognition to create facial templates during job interviews unless the candidate explicitly consents via a signed waiver. The waiver must include the candidate’s name, interview date, consent confirmation, and acknowledgment of reading the waiver.
How Braintrust AIR Avoids Violations:
- No Facial Recognition Use: Braintrust AIR does not use facial recognition technology in interviews or hiring processes.
- AI-Based Hiring Without Biometric Data: AI-driven matching is based solely on skills, experience, and qualifications rather than biometric or facial data.
- Candidate-First Approach: Ensures all AI-assisted hiring remains bias-free, privacy-focused, and compliant with Maryland regulations.
8. Colorado AI Hiring Law (High-Risk AI & Algorithmic Discrimination)
Compliance: Beginning February 1, 2026, Colorado requires employers using high-risk AI for hiring to prevent algorithmic discrimination, establish AI risk management programs, conduct annual impact assessments, notify users about AI use, and allow candidates to appeal AI-driven decisions or correct inaccuracies. If discrimination is found, it must be reported to the state attorney general within 90 days.
How Braintrust AIR Avoids Violations:
- Annual Impact Assessments: Can conduct yearly AI audits and reassess models within 90 days of major updates.
- Candidate Notification & Transparency: Notifies job seekers about AI usage in the hiring process while clarifying that Braintrust AIR does not make hiring decisions.
Braintrust AIR Compliance with EU AI Laws
Braintrust AIR adheres to the EU AI Act and GDPR standards, prioritizing transparency, fairness, and human oversight in AI-driven hiring. Our AI does not make final hiring decisions—human recruiters remain in control.
We comply with High-Risk AI standards, ensuring bias prevention, explainability, and data protection while avoiding banned AI practices like facial recognition, social scoring, and emotion tracking. Candidates are fully informed about AI’s role and can request data deletion.
Braintrust AIR undergoes regular risk audits, maintains strict security protocols, and prevents AI misuse, aligning with evolving EU regulations to ensure ethical, transparent, and legally compliant AI-driven hiring. The following sections outline our compliance with key EU laws.
1. EU AI Act – Risk-Based Classification
Compliance: The EU AI Act categorizes AI systems into four risk categories: Unacceptable, High, Limited, and Minimal Risk.
How Braintrust AIR Avoids Violations:
- Falls Under "High-Risk" AI Systems (since it impacts employment decisions).
- Meets High-Risk AI Compliance Standards:
  - Transparency: Informs candidates and clients about AI’s role in the hiring process.
  - Human Oversight: AI-driven job matching does not operate without final human review, preventing automated rejections.
  - Bias Prevention: Regular audits ensure fairness across demographic groups.
- Does NOT Fall Under "Unacceptable Risk" (banned AI practices like social scoring, manipulative behavioral tracking).
- Braintrust AIR does not exploit candidate vulnerabilities or use AI to coerce behavior.
2. Prohibited AI Practices – Ensuring Ethical Use
Compliance: The AI Act bans specific AI uses, including real-time biometric identification, emotion recognition in employment, and AI social scoring.
How Braintrust AIR Avoids Violations:
- No Real-Time Biometric Scanning: Braintrust AIR does not use facial recognition, emotion tracking, or voice profiling.
- No AI-Driven Social Scoring: Candidate rankings are skills-based, not based on behavioral tracking.
- Full Transparency in AI Decisions: Candidates and employers see why AI recommends specific candidate matches.
3. AI Transparency & Data Rights (GDPR + AI Act)
Compliance: The AI Act and GDPR (General Data Protection Regulation) require companies to:
- Clearly disclose AI’s role in decision-making.
- Allow users to request data deletion.
- Provide explanations for AI-driven decisions.
How Braintrust AIR Avoids Violations:
- AI Decision Disclosure: Users are informed when AI plays a role in the hiring process.
- Right to Explanation: While Braintrust AIR does not make hiring decisions, candidates can request insights on how AI-generated scorecards are made.
- Right to Be Forgotten: Users can request to delete their data at any time, adhering to GDPR.
- No Hidden AI Usage: Braintrust AIR does not use AI to make hiring decisions; the humans in the recruiting process make all final decisions.
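A hedged sketch of how a deletion request might be honored while keeping the deletion itself auditable; the in-memory store, log, and function names here are hypothetical, not Braintrust AIR's real system:

```python
# Hypothetical GDPR "right to be forgotten" handler; store and function
# names are illustrative assumptions.
from datetime import datetime, timezone

candidate_store = {"cand-123": {"name": "A. Candidate", "email": "a@example.com"}}
deletion_log = []  # retained so the deletion itself stays auditable

def handle_deletion_request(candidate_id: str) -> bool:
    """Erase a candidate's personal data and log that the request was honored."""
    if candidate_id not in candidate_store:
        return False
    del candidate_store[candidate_id]
    deletion_log.append({
        "candidate_id": candidate_id,
        "deleted_at": datetime.now(timezone.utc).isoformat(),
    })
    return True

print(handle_deletion_request("cand-123"))  # True
print("cand-123" in candidate_store)        # False
```

Logging only the candidate ID and timestamp (not the erased data) is one way to demonstrate compliance with a deletion request without undermining the deletion itself.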
4. AI Act Rules for General-Purpose AI (GPAI)
Compliance: The AI Act imposes additional requirements on powerful AI models, including:
- Risk assessments.
- Reporting obligations.
- Safeguards against misuse.
How Braintrust AIR Avoids Violations:
- Ongoing Risk Audits: AI-driven hiring is regularly tested to ensure fairness.
- Security & Responsible AI Usage: Prevents AI manipulation, protecting both employers and candidates.
- Limited Personalization, No Manipulation: AI enhances hiring efficiency without exploiting biases.