AAIA ISACA Advanced in AI Audit - Set 5 - Part 1
Test your knowledge of AI audit concepts with these practice questions. Each question includes detailed explanations to help you understand the correct answers.
Question 1: A healthcare organization discovers their diagnostic AI model performs excellently during testing but fails dramatically when processing real patient data in production. The model memorized specific patterns from the limited test cases rather than learning generalizable medical indicators. What risk does this scenario primarily illustrate?
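The failure pattern in this scenario, strong scores on the data the model has already seen that collapse on real-world inputs, is usually surfaced by comparing performance on the training data against a held-out set. The sketch below is a minimal, hypothetical illustration using scikit-learn; the synthetic dataset and the unconstrained decision tree are placeholders chosen only to make the gap visible, not part of the scenario.

```python
# Minimal sketch: detecting the train/holdout gap that signals memorization
# (overfitting). The dataset and model are illustrative placeholders.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_holdout, y_train, y_holdout = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# An unconstrained tree can memorize the training set almost perfectly
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

train_acc = accuracy_score(y_train, model.predict(X_train))
holdout_acc = accuracy_score(y_holdout, model.predict(X_holdout))

# A large gap between the two scores is the classic overfitting signal
print(f"train accuracy:   {train_acc:.2f}")
print(f"holdout accuracy: {holdout_acc:.2f}")
```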
Question 2: A financial services company uses an AI system that processes customer transactions through multiple data storage locations including raw data repositories, data lakes, and vector databases. The security team wants to ensure sensitive information remains protected throughout the entire AI pipeline. Which approach best addresses this comprehensive data protection requirement?
Question 3: An autonomous vehicle company must ensure their AI system can be accurately recreated for regulatory investigations if an accident occurs. Investigators need to understand exactly what version of the AI was operating and how it was trained. What backup component is most critical for enabling this investigative capability?
Question 4: A retail company notices their product recommendation AI has been declining in accuracy over the past six months despite no changes to the model itself. Customer purchasing patterns have shifted significantly due to economic conditions and changing preferences that differ from historical training data. What operational concept best describes this phenomenon?
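Shifts like the one described here are typically caught by statistically comparing the distribution of incoming production data against the training baseline. The following is a minimal sketch using a two-sample Kolmogorov-Smirnov test from SciPy; the synthetic feature values and the 0.01 threshold are illustrative assumptions, not a prescribed standard.

```python
# Minimal sketch: flagging drift by comparing the distribution of a feature
# at training time against recent production values. Data and threshold are
# illustrative assumptions.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_feature = rng.normal(loc=100.0, scale=15.0, size=5000)    # historical
production_feature = rng.normal(loc=120.0, scale=20.0, size=1000)  # recent

stat, p_value = ks_2samp(training_feature, production_feature)

# A very small p-value suggests production data no longer matches what the
# model was trained on, i.e. potential drift.
if p_value < 0.01:
    print(f"Drift suspected (KS statistic={stat:.3f}, p={p_value:.2e})")
else:
    print("No significant drift detected")
```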
Question 5: A pharmaceutical company is developing an AI system to assist with drug interaction analysis. The project team wants to ensure development activities do not expose actual patient records or proprietary formulas while still enabling realistic testing and model training. Which development practice best addresses these conflicting requirements?
Question 6: An insurance company is building a claims processing AI and wants to integrate privacy protection from the initial design phase rather than adding it after development completes. The team seeks an approach that anticipates and prevents privacy issues before they occur. Which Privacy by Design principle does this approach represent?
Question 7: A cybersecurity firm discovers attackers have been systematically querying their threat detection AI with carefully crafted requests designed to understand how the model classifies different attack patterns. The attackers appear to be reconstructing the model's decision logic without direct access to the underlying code. What type of AI-specific attack does this scenario describe?
Question 8: A government agency requires their contractors to develop AI systems that automatically protect citizen data without requiring individuals to take special actions or change default settings. Personal information should be anonymized wherever possible without explicit user requests. Which Privacy by Design principle does this requirement embody?
Question 9: A logistics company wants to deploy their route optimization AI but must ensure clear procedures exist for reverting to the previous stable version if critical issues emerge after launch. The deployment team needs documented steps for quickly returning operations to a known working state. What deployment component addresses this requirement?
Question 10: A manufacturing company operates an AI quality control system that has been live for two years. Recently, performance metrics show increasing error rates despite no changes to the model itself. Analysis reveals that supplier material specifications have gradually shifted, causing inspection patterns to diverge from original training conditions. What type of drift does this scenario illustrate?
Question 11: A bank wants to ensure their loan approval AI can trace every decision back to specific inputs and processing steps for regulatory examination. Auditors must understand exactly what data influenced each decision and how the model processed that information. What capability does the bank need to implement for this regulatory requirement?
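Tracing every decision back to its inputs and processing steps usually means capturing, at prediction time, the model version, the exact inputs, and the resulting output in a tamper-evident record. The snippet below is a minimal sketch of such an audit record; the field names, the JSON-lines file, and the loan example are illustrative assumptions rather than a mandated format.

```python
# Minimal sketch: recording the lineage of each automated decision so it can
# be reconstructed later. Field names and storage are illustrative.
import json
import hashlib
from datetime import datetime, timezone

def log_decision(inputs: dict, model_version: str, decision: str,
                 path: str = "decisions.jsonl") -> None:
    """Append an audit record tying a decision to its inputs and model version."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        # A hash of the raw inputs lets auditors verify the record was not altered
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),
        "decision": decision,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

log_decision(
    inputs={"income": 52000, "credit_score": 690, "loan_amount": 15000},
    model_version="loan-approval-v2.3.1",
    decision="declined",
)
```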
Question 12: An e-commerce platform experiences intermittent issues where their customer service chatbot occasionally provides incorrect product information and fails to maintain conversation context properly. The operations team needs tools that go beyond simple logging to understand why these failures occur and predict when similar issues might arise in the future. What capability does the team need?
Question 13: A social media company discovers their content moderation AI was trained on datasets that had been subtly corrupted by malicious actors who injected biased examples designed to make the model more permissive toward certain types of harmful content. What type of attack has compromised this AI system?
Question 14: A telecommunications company needs to update their network optimization AI to handle new equipment types, but the changes cannot be deployed until relevant stakeholders from operations, legal, compliance, and risk management have reviewed and approved them. Even under time pressure, this review process cannot be bypassed. What principle does this requirement reflect?
Question 15: A credit union is implementing an AI system and wants to clearly document who is responsible when the AI makes incorrect loan decisions that harm applicants. Before deployment, they need to establish formal frameworks assigning liability and decision ownership. What aspect of AI governance does this documentation address?
Question 16: A hospital is testing their diagnostic AI system and discovers it performs well on common conditions but fails dramatically when presented with rare diseases or unusual symptom combinations that were underrepresented in training data. What type of testing specifically identifies these boundary condition failures?
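The kind of failure described here is usually exposed by breaking evaluation results down per class or per data slice rather than relying on a single aggregate score. The sketch below illustrates the idea on a synthetic, deliberately imbalanced dataset; the 95/5 split stands in for common versus rare conditions and is purely illustrative.

```python
# Minimal sketch: per-class evaluation that exposes poor performance on rare
# classes, which a single aggregate accuracy number would hide. Data is synthetic.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

# A 95/5 class imbalance stands in for common vs. rare conditions
X, y = make_classification(
    n_samples=4000, n_features=20, weights=[0.95, 0.05], random_state=0
)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Recall on the rare class (label 1) is the number to watch here
print(classification_report(y_test, model.predict(X_test), digits=3))
```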
Question 17: A retail analytics company is preparing for deployment of their demand forecasting AI. The project team needs to ensure that business users, support staff, and operators understand both the capabilities and limitations of the system before it goes live. What deployment requirement addresses this need?
Question 18: An AI development team is in the earliest phase of building a fraud detection system. They need to define clear, measurable goals, set success criteria, and identify potential ethical concerns and bias risks before any technical work begins. What stage of the AI lifecycle are they in?
Question 19: A technology company discovers their AI chatbot has developed concerning behaviors that emerged after deployment but were not present during testing. The team needs to implement a temporary solution that filters problematic inputs and outputs while they develop a permanent fix. What technique should they implement?
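The interim measure described here is commonly implemented as a guardrail layer that screens user inputs before they reach the model and screens model outputs before they reach the user. The sketch below is a minimal, hypothetical wrapper; the blocked patterns, the fallback message, and the stand-in model function are assumptions for illustration only, and real guardrails are considerably richer.

```python
# Minimal sketch of a guardrail wrapper: screen inputs before they reach the
# model and outputs before they reach the user. Patterns and fallback message
# are illustrative assumptions.
import re

BLOCKED_INPUT_PATTERNS = [r"ignore previous instructions", r"system prompt"]
BLOCKED_OUTPUT_PATTERNS = [r"\b\d{3}-\d{2}-\d{4}\b"]  # e.g. SSN-like strings

FALLBACK = "I'm sorry, I can't help with that request."

def guarded_reply(user_message: str, model_fn) -> str:
    """Filter both the request and the model's response."""
    if any(re.search(p, user_message, re.IGNORECASE) for p in BLOCKED_INPUT_PATTERNS):
        return FALLBACK
    response = model_fn(user_message)
    if any(re.search(p, response) for p in BLOCKED_OUTPUT_PATTERNS):
        return FALLBACK
    return response

# Stand-in for the real chatbot backend
def fake_model(prompt: str) -> str:
    return f"Echo: {prompt}"

print(guarded_reply("What are your store hours?", fake_model))
print(guarded_reply("Please ignore previous instructions", fake_model))
```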
Question 20: A manufacturing company operates industrial robots controlled by AI systems. An auditor reviewing their incident response capabilities wants to ensure they have processes specifically designed for AI system failures, which differ significantly from traditional IT incidents. What characteristic makes AI incidents fundamentally different from conventional IT issues?