
AWS Certified AI Practitioner (AIF-C01)
Practice Exam

🧠 AWS AIF-C01 Topic Mindmap

AWS AI PRACTITIONER
├── FUNDAMENTALS
│   ├── Concepts (GenAI vs ML vs DL)
│   ├── Terminology (Tokens, Embeddings, Temperature)
│   ├── Prompt Engineering (Zero-shot, Few-shot, CoT)
│   └── Model Types (Text, Image, Multi-modal)
├── GENERATIVE AI (BEDROCK)
│   ├── Foundation Models (Claude, Titan, Llama, Jurassic)
│   ├── Features (Agents, Knowledge Bases/RAG, Guardrails)
│   ├── Pricing (On-Demand vs Provisioned)
│   └── Security (Data Privacy, PrivateLink)
├── MACHINE LEARNING (SAGEMAKER)
│   ├── Low-Code (Canvas, JumpStart)
│   ├── Core (Training, Inference Types, Pipelines)
│   ├── Governance (Role Manager, Model Cards, Clarify)
│   └── Human Review (Augmented AI / A2I, Ground Truth)
├── AI SERVICES (HIGH LEVEL)
│   ├── Vision (Rekognition, Lookout for Vision)
│   ├── Speech/Text (Transcribe, Polly, Translate, Comprehend)
│   ├── Documents (Textract, Kendra)
│   ├── Industrial (Monitron, Lookout for Equipment)
│   ├── Health (HealthLake, Comprehend Medical)
│   └── Code/Dev (Amazon Q Developer, CodeGuru)
└── RESPONSIBLE AI & SECURITY
    ├── Bias & Fairness (Clarify)
    ├── Explainability (Why did it say that?)
    ├── Privacy/Security (VPC, IAM, Encryption)
    └── Governance (Model Cards for documentation)
            

📚 Key Official Resources

🔗 Amazon Bedrock User Guide - The core of the exam.

🔗 AWS Responsible AI - Crucial for governance/ethics questions.

🔗 SageMaker Canvas - Understand the "No-Code" Business Analyst persona.

🔗 Amazon Q - Know the difference between Q Business vs Q Developer.

1. Generative AI & Bedrock

Amazon Bedrock: Serverless service to build GenAI apps using FMs (Foundation Models). Access models via API.
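To make "access models via API" concrete, here is a minimal sketch of building the JSON body for an `InvokeModel` call. The body schema is model-specific; this sketch assumes the Anthropic Messages format used on Bedrock, and the version string and model ID shown are illustrative — check the model's documentation before use.

```python
import json

def build_claude_request(prompt, max_tokens=512, temperature=0.5):
    """Build the JSON body for a Bedrock InvokeModel call.

    The body schema differs per model family; this assumes the
    Anthropic Messages format (anthropic_version, max_tokens, messages).
    """
    body = {
        "anthropic_version": "bedrock-2023-05-31",  # illustrative version string
        "max_tokens": max_tokens,
        "temperature": temperature,
        "messages": [{"role": "user", "content": prompt}],
    }
    return json.dumps(body)

# The actual call would look like (requires boto3 and AWS credentials):
# client = boto3.client("bedrock-runtime")
# client.invoke_model(modelId="anthropic.claude-3-sonnet-...",  # illustrative ID
#                     body=build_claude_request("Explain RAG in one sentence."))
```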

Foundation Models (FMs): Pre-trained on vast data. Providers: AI21 Labs (Jurassic), Anthropic (Claude), Cohere (Command), Meta (Llama), Mistral, Stability AI (Stable Diffusion), Amazon (Titan).

Agents: Execute multi-step tasks by breaking them down and calling APIs.

Knowledge Bases: RAG (Retrieval Augmented Generation) - connect FM to your data sources for accurate answers.

Guardrails: Filter harmful input/output content (hate speech, PII) to enforce responsible AI policies.

2. Machine Learning Services (SageMaker & High Level)

SageMaker: Fully managed service to build, train, and deploy ML models. (Canvas = No Code, JumpStart = Pre-trained models).

Rekognition: Image/Video analysis (Objects, Faces, Content Moderation).

Polly: Text-to-Speech (TTS).

Transcribe: Speech-to-Text (ASR).

Translate: Language translation.

Comprehend: NLP (Sentiment analysis, Entity extraction).

Textract: Extract text/data from scanned documents (OCR+).

Kendra: Enterprise search (Intelligent search).

3. Responsible AI & Governance

Bias: introduced pre-training (data) or post-training (tuning). SageMaker Clarify detects bias.

Explainability: Understanding *why* a model made a prediction.

Privacy: Data sent to Bedrock is NOT used to train base models. Data stays in your VPC (PrivateLink).

HITL (Human in the loop): SageMaker Augmented AI (A2I) for human review of low-confidence predictions.

PRACTICE EXAM SET 1 (Questions 1-24)
Q1. Which AWS service provides the easiest way to access high-performing Foundation Models (FMs) from leading AI startups via a single API? (A) SageMaker (B) Bedrock (C) Q (D) Comprehend
Answer: B - Amazon Bedrock is the fully managed service for accessing FMs via API.
Q2. What is "RAG" in the context of Generative AI? (A) Random Access Generation (B) Retrieval-Augmented Generation (C) Rapid AI Growth (D) Robotic Auto Generator
Answer: B - RAG enhances model output by retrieving relevant info from authorized knowledge bases before generating a response.
Q3. A company wants to build an ML model without writing any code. (A) SageMaker Studio (B) SageMaker Canvas (C) SageMaker Notebooks (D) EC2 DLAMI
Answer: B - SageMaker Canvas provides a visual, no-code interface for building ML models.
Q4. You need to convert customer service call recordings into text for analysis. (A) Polly (B) Transcribe (C) Translate (D) Lex
Answer: B - Amazon Transcribe converts speech to text (ASR).
Q5. Which technique involves providing a few examples in the prompt to guide the model? (A) Zero-shot prompting (B) Few-shot prompting (C) Fine-tuning (D) Pre-training
Answer: B - Few-shot prompting enables in-context learning by providing examples within the prompt.
Q6. Which Bedrock feature ensures your application blocks harmful content and PII? (A) Agents (B) Knowledge Bases (C) Guardrails (D) Provisioned Throughput
Answer: C - Guardrails for Amazon Bedrock implements responsible AI policies to filter content.
Q7. What is the role of "Tokens" in pricing for LLMs? (A) API keys (B) Access currency (C) Units of text (input/output) processed (D) User sessions
Answer: C - GenAI models (like Bedrock) typically charge per 1,000 input tokens and 1,000 output tokens.
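The per-token pricing in Q7 can be sketched as a two-line calculation. The prices below are hypothetical placeholders, not real Bedrock rates — the point is only that input and output tokens are billed separately, per 1,000 tokens.

```python
def invocation_cost(input_tokens, output_tokens,
                    price_in_per_1k, price_out_per_1k):
    """Estimate on-demand cost: input and output tokens are billed
    separately, per 1,000 tokens. Prices here are hypothetical."""
    return (input_tokens / 1000) * price_in_per_1k + \
           (output_tokens / 1000) * price_out_per_1k

# 2,000 input + 500 output tokens at hypothetical $0.003 / $0.015 per 1K:
cost = invocation_cost(2000, 500, 0.003, 0.015)
# → 0.0135 (i.e. about 1.4 cents)
```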
Q8. You need to detect sentiment (Positive/Negative) in product reviews. (A) Comprehend (B) Rekognition (C) Forecast (D) Personalize
Answer: A - Amazon Comprehend uses NLP to find insights like sentiment, entities, and key phrases in text.
Q9. Which parameter controls the "randomness" or "creativity" of an LLM's output? (A) Top-K (B) Temperature (C) Max Tokens (D) Stop sequence
Answer: B - Temperature controls randomness. Low temperature = deterministic/focused; High = creative/random.
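A toy illustration of what temperature does under the hood, assuming the common softmax-with-temperature formulation (real model internals vary): dividing logits by a small temperature sharpens the distribution toward the most likely token, while a large temperature flattens it.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/temperature before softmax: low temperature
    sharpens the distribution (near-deterministic), high temperature
    flattens it (more random). Toy sketch of the sampling parameter."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
cold = softmax_with_temperature(logits, 0.1)   # top token dominates
hot = softmax_with_temperature(logits, 10.0)   # close to uniform
```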
Q10. A bank wants to detect potential fraud patterns in transaction data using ML. (A) Amazon Fraud Detector (B) Amazon Monitron (C) Amazon Lookout for Vision (D) Amazon Kendra
Answer: A - Amazon Fraud Detector is a fully managed service specific for detecting fraud.
Q11. Which service allows developers to build conversational interfaces (Chatbots)? (A) Polly (B) Connect (C) Lex (D) Textract
Answer: C - Amazon Lex builds conversational interfaces using voice and text (same tech as Alexa).
Q12. You want to extract key/value pairs and tables from scanned PDF invoices. (A) Rekognition (B) Textract (C) Transcribe (D) Translate
Answer: B - Textract goes beyond OCR to extract structured data (forms, tables) from documents.
Q13. What is "Fine-Tuning"? (A) Changing the API parameters (B) Adapting a pre-trained model with a smaller, domain-specific dataset (C) Building a model from scratch (D) Reducing costs
Answer: B - Fine-tuning adapts a generic FM to a specific task using your own labeled data.
Q14. How does Amazon Bedrock handle your data privacy? (A) It uses it to train public models (B) It sells it (C) Data is not used to improve base models and stays in your region (D) It is public
Answer: C - AWS guarantees that your data used in Bedrock is NOT used to train the public base foundation models.
Q15. Which SageMaker tool detects bias in your data and models? (A) SageMaker Debugger (B) SageMaker Clarify (C) SageMaker Pipelines (D) SageMaker Edge
Answer: B - Clarify provides bias detection (pre/post training) and model explainability (SHAP values).
Q16. Identify inappropriate content in images uploaded by users. (A) Rekognition Content Moderation (B) Comprehend (C) GuardDuty (D) Macie
Answer: A - Rekognition DetectModerationLabels identifies explicit or suggestive content in images/videos.
Q17. Which AI capability allows a system to generate *new* content like images or text? (A) Discriminative AI (B) Generative AI (C) Reinforcement Learning (D) Regression
Answer: B - Generative AI creates new content. Discriminative AI classifies existing data.
Q18. You need to create a search engine for your corporate documents that understands natural language questions. (A) OpenSearch (B) Kendra (C) RDS (D) Neptune
Answer: B - Amazon Kendra is an intelligent enterprise search service powered by ML.
Q19. Which vector database capability is available in Knowledge Bases for Bedrock? (A) Aurora Serverless (B) OpenSearch Serverless (C) DynamoDB (D) Redshift
Answer: B - Knowledge Bases for Bedrock manages the vector store, heavily utilizing OpenSearch Serverless.
Q20. What is "Hallucination" in LLMs? (A) The model crashes (B) The model generates confident but factually incorrect information (C) The model becomes sentient (D) The model refuses to answer
Answer: B - Hallucination is when an AI generates false info presented as fact. RAG helps fix this.
Q21. Amazon Q is primarily: (A) A database (B) An AI-powered assistant for businesses and developers (C) A storage service (D) A firewall
Answer: B - Amazon Q is a GenAI-powered assistant tailored for work (coding, business intelligence, AWS console help).
Q22. Which customized hardware does AWS offer for training ML models? (A) Graviton (B) Trainium (C) Inferentia (D) Nitro
Answer: B - AWS Trainium is purpose-built silicon for efficient ML training. (Inferentia is for inference).
Q23. Which deployment option is "Serverless" for SageMaker inference? (A) Real-time inference (B) Serverless Inference (C) Asynchronous Inference (D) Batch Transform
Answer: B - SageMaker Serverless Inference allows you to deploy models without managing underlying instances (pay for duration).
Q24. Automate the review of low-confidence predictions from a model by sending them to humans. (A) Ground Truth (B) Augmented AI (A2I) (C) Mechanical Turk (D) Step Functions
Answer: B - A2I (Augmented AI) makes it easy to build workflows for human review of ML predictions.
PRACTICE EXAM SET 2 (Questions 25-48)
Q25. Which service helps create high-quality training datasets by managing human labelers? (A) SageMaker Ground Truth (B) Glue (C) Macie (D) Inspector
Answer: A - Ground Truth helps build Training Data sets by labeling images/text/etc.
Q26. What is the benefit of "Provisioned Throughput" in Bedrock? (A) Lower cost for low usage (B) Guaranteed capacity for consistent performance (C) Free tier (D) Public access
Answer: B - Provisioned Throughput reserves model units for you, ensuring consistent performance for high-volume prod workloads.
Q27. You want to execute a sequence of actions: "Check inventory, if low, order more". How can an LLM do this? (A) RAG (B) Agents for Amazon Bedrock (C) Fine-tuning (D) Embeddings
Answer: B - Agents can determine intent and call APIs (like an inventory system) to perform actions.
Q28. Convert text into a numeric vector that represents its semantic meaning. (A) Tokenization (B) Embeddings (C) Parsing (D) Encryption
Answer: B - Embeddings models (like Titan Embeddings) convert text/data into numerical vectors.
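Embedding vectors are compared with a similarity score; cosine similarity is the usual choice in RAG retrieval. A toy sketch with made-up 3-dimensional vectors (real embedding models such as Titan produce vectors with on the order of a thousand dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors: near 1.0 means
    similar meaning, near 0.0 unrelated. RAG retrieval ranks stored
    chunks by scores like this against the query embedding."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Made-up toy "embeddings" for illustration only:
cat = [0.9, 0.1, 0.0]
kitten = [0.85, 0.15, 0.05]
invoice = [0.0, 0.2, 0.95]
# "cat" is far closer to "kitten" than to "invoice"
```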
Q29. Provide personalized product recommendations. (A) Amazon Personalize (B) Amazon Forecast (C) Connect (D) Pinpoint
Answer: A - Amazon Personalize uses ML to create real-time recommendations (same tech as Amazon.com).
Q30. Detect anomalies in industrial equipment using sensors. (A) Monitron / Lookout for Equipment (B) Rekognition (C) Transcribe (D) Translate
Answer: A - Amazon Monitron and Lookout for Equipment are primarily for Predictive Maintenance.
Q31. Which feature prevents an LLM from responding to "How to make a bomb"? (A) Context Window (B) Guardrails (C) Temperature (D) Top-P
Answer: B - Guardrails can be configured to block dangerous, hateful, or specific categories of content.
Q32. You need to deploy a model at the "Edge" (on a device). (A) SageMaker Neo / Edge Manager (B) CloudFront (C) Outposts (D) Fargate
Answer: A - SageMaker Neo compiles models to run on specific hardware/edge devices.
Q33. Which cost model is "Pay-per-token"? (A) SageMaker Training (B) Bedrock On-Demand (C) Bedrock Provisioned (D) EC2
Answer: B - Bedrock On-Demand pricing is based on input/output tokens processed.
Q34. Improve code quality and security scans in your IDE. (A) CodeWhisperer (now part of Amazon Q Developer) (B) CodeBuild (C) CodeDeploy (D) Cloud9
Answer: A - Amazon Q Developer (formerly CodeWhisperer) provides code suggestions and security scans.
Q35. What defines the "Context Window" of an LLM? (A) The screen size (B) The amount of information (tokens) the model can consider at once (C) The training time (D) The latency
Answer: B - A larger context window (e.g., 200k tokens in Claude 3) allows processing larger documents/inputs in one go.
Q36. Model evaluation metric "ROUGE" is typically used for: (A) Image classification (B) Text Summarization (C) Audio quality (D) Fraud detection
Answer: B - ROUGE is a standard metric for evaluating automatic summarization.
Q37. Which text capability helps identify "Who" or "What" is mentioned? (A) Key Phrase Extraction (B) Entity Recognition (NER) (C) Sentiment Analysis (D) Syntax Analysis
Answer: B - NER (Named Entity Recognition) identifies people, places, organizations, dates, etc.
Q38. What is "Prompt Engineering"? (A) Building physical servers (B) Designing inputs to guide GenAI models to desired outputs (C) Writing Python code (D) Database indexing
Answer: B - The art of crafting inputs (prompts) to get the best results from LLMs.
Q39. Access a foundation model privately within your VPC. (A) Internet Gateway (B) VPC Endpoint (PrivateLink) (C) VPN (D) NAT Gateway
Answer: B - Use VPC Endpoints (PrivateLink) to connect to Bedrock/SageMaker APIs without traversing public internet.
Q40. Which role is responsible for ensuring AI systems are fair and unbiased? (A) Cloud Architect (B) AI Ethicist / Governance Lead (C) SysAdmin (D) DevOps
Answer: B - While everyone plays a part, AI Governance leads specifically focus on responsible AI practices.
Q41. Forecast future sales based on time-series data. (A) Personalize (B) Forecast (C) Textract (D) Polly
Answer: B - Amazon Forecast is a time-series forecasting service.
Q42. Which service can automatically create a video from a text script? (A) Rekognition (B) Bedrock (with video models) / Elemental (C) Transcribe (D) Connect
Answer: B - Bedrock (via models like Titan Image or Stable Diffusion video variants) can generate visual content.
Q43. Optimize the cost of SageMaker training jobs by using spare capacity. (A) On-Demand (B) Managed Spot Training (C) Reserved Instances (D) Dedicated Hosts
Answer: B - Managed Spot Training uses Spot instances to reduce training costs by up to 90%.
Q44. What is the "Temperature" setting of 0.0 likely to produce? (A) Highly random output (B) The most likely/deterministic output (C) No output (D) An error
Answer: B - 0 temperature makes the model deterministic (always picking the highest probability next token).
Q45. Analyze medical text for health information (PHI). (A) Comprehend Medical (B) Transcribe Medical (C) HealthLake (D) All of the above
Answer: D - These are all part of the AWS Health AI suite, but Comprehend Medical specifically extracts text entities from medical notes.
Q46. Which model type is "Titan"? (A) An AWS-built Foundation Model family (B) An open-source model (C) A database (D) A processor
Answer: A - Amazon Titan is AWS's own family of FMs (Text, Embeddings, Image).
Q47. Automatically draft a response to a customer email. (A) Discriminative Model (B) Generative Model (LLM) (C) Regression Model (D) Clustering
Answer: B - Generative models (LLMs) are ideal for content creation tasks like email drafting.
Q48. Which is a visual tool to orchestrate ML workflows (Data prep, train, deploy)? (A) SageMaker Pipelines (B) Step Functions (C) CodePipeline (D) Glue
Answer: A - SageMaker Pipelines is a purpose-built CI/CD service for ML.
PRACTICE EXAM SET 3 (Questions 49-74)
Q49. You need to enable your application to support conversational question answering based on *your* proprietary PDF manuals. (A) Train a new model (B) RAG (Retrieval Augmented Generation) (C) Fine-Tuning (D) Pre-training
Answer: B - RAG is the standard pattern for connecting an LLM to your own private data sources (Knowledge Base) without training.
Q50. A hospital needs to separate "PHI" (Personal Health Info) from other data in S3. (A) Macie (B) GuardDuty (C) Inspector (D) WAF
Answer: A - Macie is the service that automatically discovers, classifies, and protects sensitive data (PII/PHI) in S3.
Q51. Which Bedrock inference parameter limits the length of the generated response? (A) Stop Sequence (B) Max Tokens (C) Temperature (D) Top-P
Answer: B - "Max Tokens" (or Max Generation Length) defines the cutoff point for the generated output.
Q52. Business Analysts need to predict customer churn but don't know Python. (A) SageMaker Studio (B) SageMaker Canvas (C) SageMaker JumpStart (D) EC2
Answer: B - SageMaker Canvas is explicitly designed for "No-Code" users (Business Analysts) to build ML models visually.
Q53. You want to generate images of "flying cars" using Bedrock. Which model family should you choose? (A) Claude (B) Titan Image Generator / Stable Diffusion (C) Jurassic-2 (D) Command
Answer: B - Titan Image Generator and Stable Diffusion are "Text-to-Image" models. Claude/Jurassic/Command are "Text-to-Text".
Q54. Which service helps you improve your application's code security by scanning for hardcoded secrets? (A) Amazon Inspector (B) Amazon Q Developer (CodeWhisperer) (C) Macie (D) GuardDuty
Answer: B - Amazon Q Developer (formerly CodeWhisperer) has security scans that run in the IDE to find secrets/vulnerabilities.
Q55. What is the primary use case for "SageMaker JumpStart"? (A) Write code from scratch (B) One-click deploy of pre-trained models (C) Label data (D) Monitor bias
Answer: B - JumpStart provides a hub of pre-trained, open-source models (Hugging Face, etc.) that you can deploy with one click.
Q56. You need to verify if an FM is producing toxic content. Which evaluation method uses human reviewers? (A) Automatic Eval (B) Human Evaluation (C) Model Evaluation (D) Unit Testing
Answer: B - Human Evaluation involves real people reviewing the model's outputs for toxicity, accuracy, and style.
Q57. A call center wants to analyze calls to see *why* customers are calling (Topic modeling). (A) Transcribe + Comprehend (B) Polly + Translate (C) Connect + Rekognition (D) Lex
Answer: A - Transcribe turns audio to text. Comprehend then analyzes that text to extract topics and sentiment.
Q58. Which vector engine option for OpenSearch is "Serverless"? (A) Provisioned (B) OpenSearch Serverless (C) EC2 (D) Aurora
Answer: B - Bedrock Knowledge Bases use "Amazon OpenSearch Serverless" to store vector embeddings without managing clusters.
Q59. You need to ensure your AI model cards document the intended use and limitations. This aligns with which Responsible AI pillar? (A) Transparency/Explainability (B) Security (C) Privacy (D) Performance
Answer: A - "Transparency" involves documenting how the model works, its intended use, and limitations (Model Cards).
Q60. Improve the "Chain of Thought" reasoning of a model by asking it to: (A) "Answer quickly" (B) "Think step-by-step" (C) "Be creative" (D) "Ignore context"
Answer: B - The phrase "Think step-by-step" is the classic trigger for Chain-of-Thought (CoT) prompting to improve logic.
Q61. Which service is a fully managed "Vector Database" compatible with MongoDB? (A) DocumentDB (with vector search) (B) DynamoDB (C) Neptune (D) RDS
Answer: A - Amazon DocumentDB (with MongoDB compatibility) now supports vector search over JSON documents. (OpenSearch remains the primary RAG choice.)
Q62. What is "Watermarking" in the context of Titan Image Generator? (A) Adding a visible logo (B) Adding an invisible signature to identify AI-generated images (C) Protecting the API (D) Billing tag
Answer: B - Titan adds an invisible watermark to generated images to help identify them as AI-generated (part of Responsible AI).
Q63. A startup wants to use Stable Diffusion but doesn't want to manage servers. (A) EC2 P3 instances (B) Amazon Bedrock (C) SageMaker Training (D) ECS
Answer: B - Bedrock offers Stable Diffusion (Stability AI) as a serverless API. No servers to manage.
Q64. You need to summarize a 100-page legal document. Which model feature is critical? (A) Image generation (B) Large Context Window (C) Low latency (D) Speech output
Answer: B - A Large Context Window (e.g. 100k+ tokens) is required to fit a 100-page document into the prompt for summarization.
Q65. Which IAM policy action is required to invoke a Bedrock model? (A) bedrock:ListModels (B) bedrock:InvokeModel (C) s3:GetObject (D) sagemaker:CreateEndpoint
Answer: B - `bedrock:InvokeModel` is the specific permission needed to run inference on a model.
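A minimal identity policy granting that permission might look like the sketch below, built as a Python dict. The Resource ARN is illustrative — in practice you would scope it to the specific foundation models and Region you use.

```python
import json

# Minimal IAM identity policy allowing inference on one foundation model.
# The model ARN is illustrative; scope Resource to your own models/Region.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "bedrock:InvokeModel",
            "Resource": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-v2",
        }
    ],
}
print(json.dumps(policy, indent=2))
```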
Q66. An airline wants a chatbot that can actually *book* tickets by connecting to their legacy booking system. (A) Bedrock Agents (B) Bedrock Knowledge Base (C) SageMaker Canvas (D) Amazon Connect
Answer: A - "Agents" are designed to execute multi-step tasks and interact with external systems (APIs/Lambda).
Q67. Minimize latency for a real-time fraud detection model. (A) Asynchronous Inference (B) Real-time Inference (C) Batch Transform (D) Serverless Inference
Answer: B - "Real-time Inference" (persistent endpoint) offers the lowest, most consistent latency (ms). Serverless has cold starts.
Q68. Which tool helps non-experts perform "Feature Engineering" on raw data? (A) SageMaker Data Wrangler (B) SageMaker Debugger (C) SageMaker Edge (D) Bedrock
Answer: A - SageMaker Data Wrangler provides a UI to import, visualize, clean, and feature-engineer data with little/no code.
Q69. A developer wants to swap out an AI model (e.g. Claude to Llama) with minimal code changes. (A) Bedrock (B) SageMaker (C) EC2 (D) Lambda
Answer: A - Bedrock provides a unified API. Swapping models is often just changing the `modelId` parameter.
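The model-swap idea is easiest to see with the Converse API, which uses one request shape across providers. The request structure below is a sketch from the Converse API and the model IDs are illustrative — verify both against the current Bedrock documentation.

```python
def converse_args(model_id, user_text, max_tokens=256):
    """Build kwargs for a Bedrock Converse API call. Because Converse
    uses one request shape for all providers, swapping models is just
    a modelId change. (Shape sketched from the API; verify in docs.)"""
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": user_text}]}],
        "inferenceConfig": {"maxTokens": max_tokens},
    }

# Same prompt, two providers — only modelId differs (IDs illustrative):
claude = converse_args("anthropic.claude-3-sonnet-20240229-v1:0", "Summarize RAG.")
llama = converse_args("meta.llama3-8b-instruct-v1:0", "Summarize RAG.")
# client = boto3.client("bedrock-runtime"); client.converse(**claude)
```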
Q70. Detect "Drift" in a deployed model (accuracy degrading over time). (A) SageMaker Model Monitor (B) SageMaker Clarify (C) CloudTrail (D) Config
Answer: A - SageMaker Model Monitor continuously monitors production endpoints for data drift or model quality drift.
Q71. Which pricing component applies to Provisioned Throughput? (A) Per Token (B) Hourly commitment for Model Units (C) Per User (D) Free
Answer: B - Provisioned Throughput requires purchasing "Model Units" for a committed term (e.g., 1 or 6 months), billed hourly.
Q72. Identify distinct speakers in an audio recording ("Speaker Diarization"). (A) Polly (B) Transcribe (C) Translate (D) Rekognition
Answer: B - Amazon Transcribe supports "Speaker Diarization" to label who said what (Speaker A, Speaker B).
Q73. Your Bedrock Agent needs to access a Lambda function. What defines the API schema? (A) OpenAPI Schema (Swagger) / Action Group (B) Python script (C) IAM Policy (D) S3 Bucket
Answer: A - Agents use an OpenAPI schema (JSON) to understand the inputs/outputs of the Action Group (Lambda).
Q74. Which specialized search service allows querying data using natural language questions? (A) CloudSearch (B) Kendra (C) ElasticSearch (D) Athena
Answer: B - Kendra is specifically built for "Semantic Search" (understanding intent), unlike keyword-based search.
PRACTICE EXAM SET 4 (Questions 75-100)
Q75. Company policy forbids sending PII to any AI model. How to enforce this centrally? (A) Bedrock Guardrails (B) Manual review (C) IAM Deny (D) VPC Endpoint
Answer: A - Bedrock Guardrails can be configured with a "Sensitive Information Filter" to detect and block PII *before* it reaches the model.
Q76. Amazon Q Business can connect to which data sources? (A) Only S3 (B) 40+ Enterprise connectors (Salesforce, ServiceNow, SharePoint, etc.) (C) Only Public Internet (D) Only DynamoDB
Answer: B - Amazon Q Business has built-in connectors for many enterprise data silos to power its chat RAG answer generation.
Q77. What is "In-Context Learning"? (A) Retraining the model (B) Including instructions/data/examples in the prompt itself (C) Updating weights (D) RAG
Answer: B - In-context learning relies on the prompt content (Context Window) to teach the model, without updating model weights.
Q78. "Top-P" parameter in inference refers to: (A) The top probability token (B) Nucleus Sampling (cumulative probability cutoff) (C) Penalty (D) Pricing
Answer: B - Top-P (Nucleus Sampling) restricts the token choice to the top subset of tokens whose cumulative probability equals P (e.g. 0.9).
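The nucleus-sampling cutoff described above can be sketched directly: keep the smallest set of top-ranked tokens whose cumulative probability reaches P, and sample only from that set.

```python
def top_p_filter(probs, p):
    """Keep the smallest set of tokens whose cumulative probability
    reaches p (nucleus sampling); the model then samples only from
    this set. Returns the kept (token_index, prob) pairs."""
    ranked = sorted(enumerate(probs), key=lambda t: t[1], reverse=True)
    kept, cumulative = [], 0.0
    for idx, prob in ranked:
        kept.append((idx, prob))
        cumulative += prob
        if cumulative >= p:
            break
    return kept

# With p=0.9, the unlikely tail token (5%) is excluded:
kept = top_p_filter([0.05, 0.55, 0.40], p=0.9)
# keeps tokens 1 (0.55) and 2 (0.40): cumulative 0.95 >= 0.9
```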
Q79. Which service helps you conduct a POC (Proof of Concept) for GenAI quickly? (A) Bedrock Playgrounds (B) EC2 (C) SageMaker Training Job (D) EKS
Answer: A - Bedrock Console "Playgrounds" allow you to instantly test prompts against different models (Chat, Text, Image) via UI.
Q80. Evaluate the "Correctness" of an RAG application. (A) RAGAS / Model Evaluation (B) CloudWatch (C) Cost Explorer (D) Latency metrics
Answer: A - Model Evaluation (in Bedrock) or frameworks like RAGAS measure "Faithfulness", "Answer Relevance", and "Context Recall".
Q81. You need to run a sentiment analysis job on 10 million documents overnight. (A) Real-time Endpoint (B) Asynchronous Inference (C) Batch Transform (D) Serverless Inference
Answer: C - Batch Transform is designed for offline processing of large datasets where latency doesn't matter.
Q82. Which model is "Multi-Modal"? (A) Titan Text Embeddings (B) Claude 3 Sonnet (C) Polly (D) Translate
Answer: B - Claude 3 (and Titan Multimodal Embeddings) can understand/process both Text AND Images (Vision).
Q83. Reduce the size and latency of a model for mobile deployment without losing much accuracy. (A) Pruning / Quantization (SageMaker Neo) (B) Training longer (C) Increasing layers (D) RAG
Answer: A - Quantization (reducing precision from float32 to int8) and Pruning are compilation techniques used by SageMaker Neo.
Q84. Security team requires all Bedrock API calls to be logged for audit. (A) CloudTrail (B) CloudWatch Metrics (C) VPC Flow Logs (D) GuardDuty
Answer: A - CloudTrail logs the API activity (who, when, what) for `InvokeModel` management events (and data events if enabled).
Q85. What is the "Landing Zone" for an Agent? (A) Where it saves files (B) The prompt (C) There isn't one (D) The final response
Answer: C - Agents don't have a "Landing Zone" (a trick option borrowed from AWS Control Tower). Agents have "Action Groups".
Q86. Provide a natural language interface for querying a SQL database (Text-to-SQL). (A) Bedrock / Q (B) RDS (C) DynamoDB (D) ElastiCache
Answer: A - Generative AI (Bedrock Knowledge Bases or simple prompt engineering) enables Text-to-SQL generation.
Q87. Which feature allows you to "Continuously Pre-train" a model with unlabeled data? (A) Fine-tuning (B) Continued Pre-training (Bedrock custom models) (C) RAG (D) Agents
Answer: B - Continued Pre-training (Domain Adaptation) uses huge amounts of *unlabeled* data to teach the model a new domain (e.g. medical/financial jargon).
Q88. What is the role of the "Orchestrator" in a Bedrock Agent? (A) It runs the Lambda (B) It breaks down the user request into logical steps (Chain-of-Thought) (C) It stores data (D) It pays the bill
Answer: B - The fully managed Agent Orchestrator analyzes the prompt, breaks it into tasks, and decides which Action Group/KB to call.
Q89. Which AWS service offers "Clinical" NLP features? (A) Amazon Comprehend Medical (B) Amazon HealthLake (C) Both A and B (D) None
Answer: C - Comprehend Medical extracts entities. HealthLake stores/queries FHIR data and uses NLP integrated inside it.
Q90. Your model output is cut off in mid-sentence. Why? (A) Temperature too high (B) Max Tokens limit reached (C) Invalid API key (D) Internet failure
Answer: B - If the generation hits the `max_tokens` (or `max_gen_len`) limit, it stops abruptly.
Q91. Can you use Amazon Bedrock in a "Disconnected" (Offline) environment? (A) Yes, download the model (B) No, it's a managed cloud service (C) Only with Outposts (D) Only with Snowball
Answer: B - Bedrock is a regional, cloud-based API service. You cannot download the weights to run offline (unlike SageMaker JumpStart where you might).
Q92. What is "Prompt Injection"? (A) A speed optimization (B) A security attack where malicious inputs trick the model into ignoring its instructions (C) A database query (D) A deployment method
Answer: B - Prompt Injection is an adversarial attack that overrides the system instructions, effectively "hijacking" the model.
Q93. A developer needs to extract data from a scanned driver's license. (A) Textract AnalyzeID (B) Rekognition (C) Transcribe (D) Translate
Answer: A - Textract has a specialized API `AnalyzeID` for identity documents (Passports, Licenses).
Q94. You are using Amazon Q within the AWS Console. What can it do? (A) Troubleshoot errors (B) Suggest infrastructure (C) Chat about documentation (D) All of the above
Answer: D - Amazon Q in the console is an expert assistant for troubleshooting, architecture suggestions, and docs lookup.
Q95. Which technique reduces the risk of "Model Hallucination"? (A) Increasing Temperature (B) RAG (Retrieval Augmented Generation) (C) Decreasing Max Tokens (D) Zero-shot
Answer: B - RAG grounds the model in factual data retrieved from a trusted source, significantly reducing hallucinations.
Q96. What is the input/output format for most Bedrock models? (A) SQL (B) JSON (C) XML (D) Binary
Answer: B - The `InvokeModel` API expects a JSON payload (prompt, parameters) and returns a JSON response.
Q97. Which service is best for "Predictive Maintenance"? (A) Amazon Monitron (B) Amazon Personalize (C) Amazon Lex (D) Amazon Polly
Answer: A - Monitron (hardware sensors + service) allows detecting abnormal machine behavior (vibration/temp) to predict failures.
Q98. A data scientist wants to share their ML model with other teams securely. (A) SageMaker Model Registry (B) Email (C) S3 public bucket (D) USB drive
Answer: A - SageMaker Model Registry tracks model versions, approval status, and metadata for governance and sharing.
Q99. Which AI concept involves "Giving the model a persona"? (A) System Prompting (B) Temperature (C) Embeddings (D) Fine-tuning
Answer: A - System Prompts (or System Instructions) set the behavior/persona of the model (e.g. "You are a helpful banking assistant").
Q100. Why use Amazon Bedrock "Batch Inference"? (A) For real-time chat (B) To process large volumes of prompts asynchronously at lower cost/urgency (C) For higher latency (D) For video streaming
Answer: B - Batch Inference is for processing millions of prompts (e.g. summarizing huge archives) where immediate response isn't needed.
END OF EXAM