Testing AI/ML-based applications is challenging given the lack of a standardized approach. Current test coverage for models is inadequate, with few or no tests for security, privacy, and trust.
Assuring the quality of AI/ML-based projects requires a different approach from traditional testing. We need tests that surface bias and fairness issues in models, and tests that assess the transparency, interpretability, and explainability of model decisions. A simple fairness test might look like the sketch below.
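For illustration only, here is a minimal sketch of a demographic parity check in Python, one common way to surface bias in a model's decisions. The predictions, group labels, and tolerance threshold are all hypothetical, and the sketch is not drawn from the testAIng product.

```python
# A minimal sketch of a demographic parity check: compare the
# positive-prediction rate across two groups. All data is hypothetical.

def demographic_parity_gap(preds, groups):
    """Absolute difference in positive-prediction rate between two groups."""
    rates = {}
    for g in set(groups):
        outcomes = [p for p, grp in zip(preds, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    a, b = rates.values()
    return abs(a - b)

# Hypothetical binary model outputs (1 = positive decision) and group labels.
preds  = [1, 0, 1, 0, 1, 0, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_gap(preds, groups)
threshold = 0.2  # assumed tolerance; in practice this is a policy decision
print(f"parity gap = {gap:.2f}: {'PASS' if gap <= threshold else 'FAIL'}")
```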
Understanding the key elements of testing these applications vastly improves overall testing effectiveness and inspires confidence in the deployed models.
This fireside webinar explores and evaluates the new tools needed to test AI/ML models comprehensively. The webinar will cover:
- The lifecycle of an AI/ML project
- Testing for AI/ML vs. traditional projects: a comparison
- Test-first AI/ML processes and what they deliver (edge cases, security, explainability, privacy, etc.)
- A short demo of the Trigent-testAIng product

Speakers:
Dr. Srinivas Padmanabhuni
CTO, testAIng

Diwakar Menon
Head - QA & Testing, Trigent Software
Topic:
Why current data science test processes in AI/ML are insufficient
- Thursday, Sep 16, 2021
- 11 AM – 12 PM ET
- Virtual Webinar