NATIONWIDE - NOVEMBER 2025 - (USAnews.com) Ami Kumar, co-founder of Contrails AI, is at the forefront of developing artificial intelligence solutions that protect children from digital harms. At the 2025 Family Online Safety Institute Annual Conference, Kumar addressed the growing threats posed by generative AI tools, which enable the creation of synthetic media targeting children. The conference buzzed with urgency as global leaders debated how to confront these emerging dangers, and Kumar’s participation underscored the stakes of her team’s work.
The Genesis of a Mission-Driven Startup
Kumar's path to founding Contrails AI began long before AI-generated threats became a widespread concern. With over 15 years of experience in Trust & Safety roles, she witnessed firsthand how digital platforms often became breeding grounds for predators, particularly those targeting vulnerable children. Frustrated by ineffective detection tools and delayed responses, she set out to build a solution focused not just on identifying harm, but on preventing it.
At Contrails AI, Kumar emphasizes a proactive, empathetic approach to digital safety. “Technology can be part of the problem or part of the solution,” she said during the conference. The company focuses on prevention, early detection, and empowering human moderators with explainable AI tools that go beyond traditional content moderation to enable swift, informed action.
Building Explainable AI for Child Protection
Contrails AI’s differentiator in the crowded content moderation space is its commitment to explainable AI. Unlike traditional AI models, which often flag harmful content without providing insight into their reasoning, Contrails AI ensures that every detection is transparent. This transparency allows moderators and regulators to understand how and why decisions are made, addressing the "black box" problem that hampers many AI systems.
For child safety, this explainability is crucial. False positives could delay legitimate content, while false negatives could allow harmful material to slip through. By making AI decisions traceable and auditable, Contrails AI provides a system that human moderators can trust and improve over time.
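Contrails AI has not published its internal data model, but the idea of a traceable, auditable decision is easy to illustrate. The Python sketch below shows one hypothetical shape such a record could take; every class name, field, and score in it is an assumption for illustration, not the company's actual implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch: each automated decision carries the evidence behind
# it, so moderators and auditors can later reconstruct *why* content was
# flagged rather than trusting an opaque score.
@dataclass
class DetectionRecord:
    content_id: str
    verdict: str                 # e.g. "flagged" or "cleared"
    confidence: float            # overall model score in [0, 1]
    signals: dict = field(default_factory=dict)   # per-signal evidence
    model_version: str = "unknown"
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def explanation(self) -> str:
        """Human-readable trace of the signals that drove the verdict."""
        lines = [
            f"Content {self.content_id}: {self.verdict} "
            f"(confidence {self.confidence:.2f}, model {self.model_version})"
        ]
        for name, score in sorted(self.signals.items(), key=lambda kv: -kv[1]):
            lines.append(f"  - {name}: {score:.2f}")
        return "\n".join(lines)

# A record like this could be audited later by a reviewer or a regulator.
record = DetectionRecord(
    content_id="vid_0042",
    verdict="flagged",
    confidence=0.91,
    signals={"face_swap_artifacts": 0.94, "audio_splice": 0.71},
    model_version="detector-v3.2",
)
print(record.explanation())
```

A structure like this is what "traceable and auditable" means in practice: the record, not just the verdict, is what gets stored and reviewed.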
The Technology Behind Child Protection
Contrails AI’s approach combines computer vision, machine learning, and human-centered design to detect synthetic media. The company’s multimodal deepfake detection analyzes video, audio, and text simultaneously, which sharply improves accuracy in identifying threats, even across languages and cultural contexts.
The technical architecture, led by co-founder Digvijay Singh, allows Contrails AI’s systems to scale across platforms and adapt to new threats. Singh's experience in building large-scale fraud and safety detection systems ensures the tools can handle millions of pieces of content daily, while evolving to meet new AI-generated threats.
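The article does not disclose the architecture's details, but a common pattern for multimodal detection is to score each modality with its own model and fuse the results. The sketch below shows one such weighted fusion; the detector functions are stubs and the weights are assumed purely for illustration.

```python
# Illustrative multimodal fusion: each modality gets its own detector score,
# and a weighted combination produces the overall risk score.
# All functions below are stubs; a real system would run trained models.

def score_video(frames) -> float:
    """Stub: would run a vision model over sampled frames."""
    return 0.88

def score_audio(waveform) -> float:
    """Stub: would look for voice-cloning and splicing artifacts."""
    return 0.35

def score_text(transcript) -> float:
    """Stub: would flag grooming or sextortion language patterns."""
    return 0.60

# Assumed weights; in practice these would be learned or tuned per platform.
WEIGHTS = {"video": 0.5, "audio": 0.3, "text": 0.2}

def fused_risk(frames, waveform, transcript) -> float:
    scores = {
        "video": score_video(frames),
        "audio": score_audio(waveform),
        "text": score_text(transcript),
    }
    return sum(WEIGHTS[m] * s for m, s in scores.items())

print(f"fused risk score: {fused_risk(None, None, None):.2f}")
```

Fusing modalities this way means a manipulation that is subtle in the video track can still be caught when the audio or transcript contradicts it.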
Addressing Gendered Harms and Digital Abuse
Kumar’s participation in the panel on gender and online harm underscored another critical aspect of Contrails AI’s work. The company’s research reveals that AI-generated content disproportionately targets women and girls, with deepfake pornography and sextortion schemes on the rise. This gendered harm requires specialized detection approaches that consider both the technical characteristics of the media and the social contexts in which it's deployed.
Contrails AI’s models address these issues by recognizing patterns indicative of gender-based targeting. This approach reflects Kumar’s belief that effective AI safety must not only consider technology but also the human impact, particularly in marginalized communities.
Real-World Impact and Global Partnerships
Unlike many AI safety companies whose products are still in development, Contrails AI has already deployed its solutions globally across social media platforms, content-sharing sites, and child protection agencies. These real-world implementations have provided valuable feedback, further refining the company’s approach. Their tools are particularly effective at detecting sophisticated deepfake content that evades traditional detection methods, offering protection against both video and metadata manipulation.
Child protection agencies using Contrails AI report significant improvements in response times and detection accuracy. The company's explainable AI features have also proven invaluable in helping investigators build stronger cases by documenting exactly how harmful content was identified.
The Transparency Advantage
In an industry where many AI companies keep their methodologies secret, Contrails AI’s commitment to transparency has set them apart. This openness has made them a trusted partner for regulators who need to validate AI tools before endorsing them. The company's documentation of training data, bias testing, and monitoring protocols provides assurance to those working in child safety, ensuring that the technology is both reliable and understandable.
This transparency also allows for faster adoption across different regulatory environments, as platforms and regulators can directly examine the decision-making process behind the AI’s actions.
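One common way to make such documentation inspectable is a machine-readable "model card." The fields below are a hypothetical sketch of what a regulator-facing record might contain; none of the names or values reflect Contrails AI's actual data or protocols.

```python
import json

# Hypothetical model card: a machine-readable summary that platforms and
# regulators could inspect before adopting a detector.
model_card = {
    "model": "synthetic-media-detector",   # illustrative name
    "version": "3.2.0",
    "training_data": {
        "sources": ["licensed datasets", "partner-contributed samples"],
        "languages": ["en", "es", "hi"],   # assumed coverage
    },
    "bias_testing": {
        "protocol": "per-demographic false-positive/false-negative audit",
        "last_run": "2025-10-01",
    },
    "monitoring": {
        "drift_checks": "weekly",
        "human_review_sample_rate": 0.05,  # 5% of automated decisions
    },
}

print(json.dumps(model_card, indent=2))
```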
Agentic Workflows and Proactive Protection
Contrails AI is also innovating with "agentic workflows," which allow the system to automatically flag, label, and report harmful content while providing detailed explanations for each action. These workflows operate with different levels of autonomy: obvious threats are addressed immediately, while more ambiguous cases are escalated for human review. This graduated approach delivers both speed and accuracy in content moderation.
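The article describes this graduated autonomy only at a high level. Purely as an illustration, the sketch below shows how such routing could work; the thresholds, action names, and function signature are all assumptions, not the company's actual workflow.

```python
from enum import Enum

class Action(Enum):
    AUTO_REMOVE = "auto_remove"    # clear-cut violation: act immediately
    AUTO_LABEL = "auto_label"      # likely synthetic: label and report
    HUMAN_REVIEW = "human_review"  # ambiguous: escalate to a moderator
    ALLOW = "allow"                # low risk: no action

# Assumed thresholds; real deployments would tune these per platform and
# per harm category.
def route(risk_score: float, explanation: str) -> tuple[Action, str]:
    """Graduated autonomy: higher confidence permits more automated action,
    and the explanation travels with every decision."""
    if risk_score >= 0.95:
        return Action.AUTO_REMOVE, explanation
    if risk_score >= 0.80:
        return Action.AUTO_LABEL, explanation
    if risk_score >= 0.40:
        return Action.HUMAN_REVIEW, explanation
    return Action.ALLOW, explanation

action, why = route(0.91, "face-swap artifacts + spliced audio")
print(action.value, "-", why)  # auto_label - face-swap artifacts + spliced audio
```

The key design choice is that no tier discards the explanation: even fully automated actions remain reviewable after the fact.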
Global Scale and the Need for Scalable Solutions
The growing volume and sophistication of AI-generated harmful content targeting children present an immense challenge. Traditional content moderation, relying on human reviewers and basic detection methods, cannot keep up. Contrails AI’s automated detection and explainability features allow platforms to scale their protection efforts without overburdening human moderators, ensuring both children and reviewers are safeguarded.
Industry Recognition and Future Directions
Contrails AI’s commitment to explainable AI and its real-world impact have garnered recognition in the industry, including at the FOSI conference. Their solutions are already helping platforms and child protection organizations tackle synthetic media threats more effectively. As AI governance frameworks continue to develop, the company’s transparency positions them well to navigate regulatory environments and maintain trust with stakeholders.
Looking ahead, Contrails AI is exploring applications of its technology beyond child safety, including financial fraud detection and medical diagnosis support. However, their primary focus remains on protecting children, with ongoing efforts to make their tools more accessible to smaller platforms and organizations that lack extensive resources.
Conclusion
Ami Kumar and the Contrails AI team are proving that AI can be both effective and ethical. As the digital landscape continues to evolve and new threats emerge, their work is shaping how society can protect the most vulnerable users, particularly children, from the growing dangers of AI-generated content. Through transparency, explainable AI, and proactive protection, Contrails AI is leading the charge to ensure that technology is part of the solution, not the problem, in the fight for online child safety.