Free AWS Certified AI Practitioner Mock Exam #1
Q1: Topic #1
A company makes forecasts each quarter to decide how to optimize operations to meet expected demand. The company uses ML models to make these forecasts. An AI practitioner is writing a report about the trained ML models to provide transparency and explainability to company stakeholders. What should the AI practitioner include in the report to meet the transparency and explainability requirements?
A
Code for model training
B
Partial dependence plots (PDPs)
C
Sample data for training
D
Model convergence tables
Correct Answer:
B
Q2: Topic #1
A law firm wants to build an AI application by using large language models (LLMs). The application will read legal documents and extract key points from the documents.Which solution meets these requirements?
A
Build an automatic named entity recognition system.
B
Create a recommendation engine.
C
Develop a summarization chatbot.
D
Develop a multi-language translation system.
Correct Answer:
C
Q3: Topic #1
A company wants to classify human genes into 20 categories based on gene characteristics. The company needs an ML algorithm to document how the inner mechanism of the model affects the output.Which ML algorithm meets these requirements?
A
Decision trees
B
Linear regression
C
Logistic regression
D
Neural networks
Correct Answer:
A
Q4: Topic #1
A company has built an image classification model to predict plant diseases from photos of plant leaves. The company wants to evaluate how many images the model classified correctly. Which evaluation metric should the company use to measure the model's performance?
A
R-squared score
B
Accuracy
C
Root mean squared error (RMSE)
D
Learning rate
Correct Answer:
B
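Illustration for Q4: accuracy is the share of images the model classified correctly. A minimal scikit-learn sketch with made-up leaf-disease labels:

```python
from sklearn.metrics import accuracy_score

# Made-up ground-truth labels and model predictions for six leaf photos
y_true = ["rust", "healthy", "blight", "healthy", "rust", "blight"]
y_pred = ["rust", "healthy", "blight", "rust", "rust", "blight"]

# Accuracy = correctly classified images / total images
print(accuracy_score(y_true, y_pred))  # 0.833... (5 of 6 correct)
```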
Q5: Topic #1
A company is using a pre-trained large language model (LLM) to build a chatbot for product recommendations. The company needs the LLM outputs to be short and written in a specific language.Which solution will align the LLM response quality with the company's expectations?
A
Adjust the prompt.
B
Choose an LLM of a different size.
C
Increase the temperature.
D
Increase the Top K value.
Correct Answer:
A
Q6: Topic #1
A company uses Amazon SageMaker for its ML pipeline in a production environment. The company has large input data sizes up to 1 GB and processing times up to 1 hour. The company needs near real-time latency.Which SageMaker inference option meets these requirements?
A
Real-time inference
B
Serverless inference
C
Asynchronous inference
D
Batch transform
Correct Answer:
C
Q7: Topic #1
A company is using domain-specific models. The company wants to avoid creating new models from the beginning. The company instead wants to adapt pre-trained models to create models for new, related tasks.Which ML strategy meets these requirements?
A
Increase the number of epochs.
B
Use transfer learning.
C
Decrease the number of epochs.
D
Use unsupervised learning.
Correct Answer:
B
Q8: Topic #1
A company is building a solution to generate images for protective eyewear. The solution must have high accuracy and must minimize the risk of incorrect annotations.Which solution will meet these requirements?
A
Human-in-the-loop validation by using Amazon SageMaker Ground Truth Plus
B
Data augmentation by using an Amazon Bedrock knowledge base
C
Image recognition by using Amazon Rekognition
D
Data summarization by using Amazon QuickSight Q
Correct Answer:
A
Q9: Topic #1
A company wants to create a chatbot by using a foundation model (FM) on Amazon Bedrock. The FM needs to access encrypted data that is stored in an Amazon S3 bucket. The data is encrypted with Amazon S3 managed keys (SSE-S3). The FM encounters a failure when attempting to access the S3 bucket data. Which solution will meet these requirements?
A
Ensure that the role that Amazon Bedrock assumes has permission to decrypt data with the correct encryption key.
B
Set the access permissions for the S3 buckets to allow public access to enable access over the internet.
C
Use prompt engineering techniques to tell the model to look for information in Amazon S3.
D
Ensure that the S3 data does not contain sensitive information.
Correct Answer:
A
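Illustration for Q9: with SSE-S3, Amazon S3 decrypts objects transparently once the role that Amazon Bedrock assumes is allowed to read them (with SSE-KMS the role would also need kms:Decrypt on the key). A minimal sketch, assuming hypothetical role, policy, and bucket names:

```python
import json
import boto3

iam = boto3.client("iam")

# Allow the role that Amazon Bedrock assumes to read the encrypted objects.
# Bucket and role names below are placeholders.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-chatbot-data",
                "arn:aws:s3:::example-chatbot-data/*",
            ],
        }
    ],
}

iam.put_role_policy(
    RoleName="BedrockDataAccessRole",            # hypothetical service role
    PolicyName="AllowReadEncryptedChatbotData",
    PolicyDocument=json.dumps(policy),
)
```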
Q10: Topic #1
A company wants to use language models to create an application for inference on edge devices. The inference must have the lowest latency possible.Which solution will meet these requirements?
A
Deploy optimized small language models (SLMs) on edge devices.
B
Deploy optimized large language models (LLMs) on edge devices.
C
Incorporate a centralized small language model (SLM) API for asynchronous communication with edge devices.
D
Incorporate a centralized large language model (LLM) API for asynchronous communication with edge devices.
Correct Answer:
A
Q11: Topic #1
A company wants to build an ML model by using Amazon SageMaker. The company needs to share and manage variables for model development across multiple teams.Which SageMaker feature meets these requirements?
A
Amazon SageMaker Feature Store
B
Amazon SageMaker Data Wrangler
C
Amazon SageMaker Clarify
D
Amazon SageMaker Model Cards
Correct Answer:
A
Q12: Topic #1
A company wants to use generative AI to increase developer productivity and software development. The company wants to use Amazon Q Developer.What can Amazon Q Developer do to help the company meet these requirements?
A
Create software snippets, reference tracking, and open source license tracking.
B
Run an application without provisioning or managing servers.
C
Enable voice commands for coding and providing natural language search.
D
Convert audio files to text documents by using ML models.
Correct Answer:
A
Q13: Topic #1
A financial institution is using Amazon Bedrock to develop an AI application. The application is hosted in a VPC. To meet regulatory compliance standards, the VPC is not allowed access to any internet traffic.Which AWS service or feature will meet these requirements?
A
AWS PrivateLink
B
Amazon Macie
C
Amazon CloudFront
D
Internet gateway
Correct Answer:
A
Q14: Topic #1
A company wants to develop an educational game where users answer questions such as the following: "A jar contains six red, four green, and three yellow marbles. What is the probability of choosing a green marble from the jar?" Which solution meets these requirements with the LEAST operational overhead?
A
Use supervised learning to create a regression model that will predict probability.
B
Use reinforcement learning to train a model to return the probability.
C
Use code that will calculate probability by using simple rules and computations.
D
Use unsupervised learning to create a model that will estimate probability density.
Correct Answer:
C
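Illustration for Q14: the marble question needs only a rule-based computation, not an ML model:

```python
from fractions import Fraction

# Favorable outcomes / total outcomes
red, green, yellow = 6, 4, 3
p_green = Fraction(green, red + green + yellow)

print(p_green)         # 4/13
print(float(p_green))  # ~0.3077
```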
Q15: Topic #1
Which metric measures the runtime efficiency of operating AI models?
A
Customer satisfaction score (CSAT)
B
Training time for each epoch
C
Average response time
D
Number of training instances
Correct Answer:
C
Q16: Topic #1
A company is building a contact center application and wants to gain insights from customer conversations. The company wants to analyze and extract key information from the audio of the customer calls.Which solution meets these requirements?
A
Build a conversational chatbot by using Amazon Lex.
B
Transcribe call recordings by using Amazon Transcribe.
C
Extract information from call recordings by using Amazon SageMaker Model Monitor.
D
Create classification labels by using Amazon Comprehend.
Correct Answer:
B
Q17: Topic #1
A company has petabytes of unlabeled customer data to use for an advertisement campaign. The company wants to classify its customers into tiers to advertise and promote the company's products.Which methodology should the company use to meet these requirements?
A
Supervised learning
B
Unsupervised learning
C
Reinforcement learning
D
Reinforcement learning from human feedback (RLHF)
Correct Answer:
B
Q18: Topic #1
An AI practitioner wants to use a foundation model (FM) to design a search application. The search application must handle queries that have text and images.Which type of FM should the AI practitioner use to power the search application?
A
Multi-modal embedding model
B
Text embedding model
C
Multi-modal generation model
D
Image generation model
Correct Answer:
A
Q19: Topic #1
A company uses a foundation model (FM) from Amazon Bedrock for an AI search tool. The company wants to fine-tune the model to be more accurate by using the company's data.Which strategy will successfully fine-tune the model?
A
Provide labeled data with the prompt field and the completion field.
B
Prepare the training dataset by creating a .txt file that contains multiple lines in .csv format.
C
Purchase Provisioned Throughput for Amazon Bedrock.
D
Train the model on journals and textbooks.
Correct Answer:
A
Q20: Topic #1
A company wants to use AI to protect its application from threats. The AI solution needs to check if an IP address is from a suspicious source.Which solution meets these requirements?
A
Build a speech recognition system.
B
Create a natural language processing (NLP) named entity recognition system.
C
Develop an anomaly detection system.
D
Create a fraud forecasting system.
Correct Answer:
C
Q21: Topic #1
Which feature of Amazon OpenSearch Service gives companies the ability to build vector database applications?
A
Integration with Amazon S3 for object storage
B
Support for geospatial indexing and queries
C
Scalable index management and nearest neighbor search capability
D
Ability to perform real-time analysis on streaming data
Correct Answer:
C
Q22: Topic #1
Which option is a use case for generative AI models?
A
Improving network security by using intrusion detection systems
B
Creating photorealistic images from text descriptions for digital marketing
C
Enhancing database performance by using optimized indexing
D
Analyzing financial data to forecast stock market trends
Correct Answer:
B
Q23: Topic #1
A company wants to build a generative AI application by using Amazon Bedrock and needs to choose a foundation model (FM). The company wants to know how much information can fit into one prompt.Which consideration will inform the company's decision?
A
Temperature
B
Context window
C
Batch size
D
Model size
Correct Answer:
B
Q24: Topic #1
A company wants to make a chatbot to help customers. The chatbot will help solve technical problems without human intervention.The company chose a foundation model (FM) for the chatbot. The chatbot needs to produce responses that adhere to company tone.Which solution meets these requirements?
A
Set a low limit on the number of tokens the FM can produce.
B
Use batch inferencing to process detailed responses.
C
Experiment and refine the prompt until the FM produces the desired responses.
D
Define a higher number for the temperature parameter.
Correct Answer:
C
Q25: Topic #1
A company wants to use a large language model (LLM) on Amazon Bedrock for sentiment analysis. The company wants to classify the sentiment of text passages as positive or negative. Which prompt engineering strategy meets these requirements?
A
Provide examples of text passages with corresponding positive or negative labels in the prompt followed by the new text passage to be classified.
B
Provide a detailed explanation of sentiment analysis and how LLMs work in the prompt.
C
Provide the new text passage to be classified without any additional context or examples.
D
Provide the new text passage with a few examples of unrelated tasks, such as text summarization or question answering.
Correct Answer:
A
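Illustration for Q25: few-shot prompting places labeled examples before the new passage. A minimal sketch that only assembles the prompt string (the passages are made up); the string would then be sent to the chosen Amazon Bedrock model:

```python
# Labeled examples first, then the new passage to classify
examples = [
    ("The delivery was fast and the product works perfectly.", "Positive"),
    ("The item arrived broken and support never replied.", "Negative"),
]
new_passage = "Setup took five minutes and the quality is great."

prompt = "Classify the sentiment of each passage as Positive or Negative.\n\n"
for text, label in examples:
    prompt += f"Passage: {text}\nSentiment: {label}\n\n"
prompt += f"Passage: {new_passage}\nSentiment:"

print(prompt)
```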
Q26: Topic #1
A security company is using Amazon Bedrock to run foundation models (FMs). The company wants to ensure that only authorized users invoke the models. The company needs to identify any unauthorized access attempts to set appropriate AWS Identity and Access Management (IAM) policies and roles for future iterations of the FMs.Which AWS service should the company use to identify unauthorized users that are trying to access Amazon Bedrock?
A
AWS Audit Manager
B
AWS CloudTrail
C
Amazon Fraud Detector
D
AWS Trusted Advisor
Correct Answer:
B
Q27: Topic #1
A company has developed an ML model for image classification. The company wants to deploy the model to production so that a web application can use the model.The company needs to implement a solution to host the model and serve predictions without managing any of the underlying infrastructure.Which solution will meet these requirements?
A
Use Amazon SageMaker Serverless Inference to deploy the model.
B
Use Amazon CloudFront to deploy the model.
C
Use Amazon API Gateway to host the model and serve predictions.
D
Use AWS Batch to host the model and serve predictions.
Correct Answer:
A
Q28: Topic #1
An AI company periodically evaluates its systems and processes with the help of independent software vendors (ISVs). The company needs to receive email message notifications when an ISV's compliance reports become available.Which AWS service can the company use to meet this requirement?
A
AWS Audit Manager
B
AWS Artifact
C
AWS Trusted Advisor
D
AWS Data Exchange
Correct Answer:
B
Q29: Topic #1
A company wants to use a large language model (LLM) to develop a conversational agent. The company needs to prevent the LLM from being manipulated with common prompt engineering techniques to perform undesirable actions or expose sensitive information.Which action will reduce these risks?
A
Create a prompt template that teaches the LLM to detect attack patterns.
B
Increase the temperature parameter on invocation requests to the LLM.
C
Avoid using LLMs that are not listed in Amazon SageMaker.
D
Decrease the number of input tokens on invocations of the LLM.
Correct Answer:
A
Q30: Topic #1
A company is using the Generative AI Security Scoping Matrix to assess security responsibilities for its solutions. The company has identified four different solution scopes based on the matrix.Which solution scope gives the company the MOST ownership of security responsibilities?
A
Using a third-party enterprise application that has embedded generative AI features.
B
Building an application by using an existing third-party generative AI foundation model (FM).
C
Refining an existing third-party generative AI foundation model (FM) by fine-tuning the model by using data specific to the business.
D
Building and training a generative AI model from scratch by using specific data that a customer owns.
Correct Answer:
D
Q31: Topic #1
An AI practitioner has a database of animal photos. The AI practitioner wants to automatically identify and categorize the animals in the photos without manual human effort.Which strategy meets these requirements?
A
Object detection
B
Anomaly detection
C
Named entity recognition
D
Inpainting
Correct Answer:
A
Q32: Topic #1
A company wants to create an application by using Amazon Bedrock. The company has a limited budget and prefers flexibility without long-term commitment.Which Amazon Bedrock pricing model meets these requirements?
A
On-Demand
B
Model customization
C
Provisioned Throughput
D
Spot Instance
Correct Answer:
A
Q33: Topic #1
Which AWS service or feature can help an AI development team quickly deploy and consume a foundation model (FM) within the team's VPC?
A
Amazon Personalize
B
Amazon SageMaker JumpStart
C
PartyRock, an Amazon Bedrock Playground
D
Amazon SageMaker endpoints
Correct Answer:
B
Q34: Topic #1
How can companies use large language models (LLMs) securely on Amazon Bedrock?
A
Design clear and specific prompts. Configure AWS Identity and Access Management (IAM) roles and policies by using least privilege access.
B
Enable AWS Audit Manager for automatic model evaluation jobs.
C
Enable Amazon Bedrock automatic model evaluation jobs.
D
Use Amazon CloudWatch Logs to make models explainable and to monitor for bias.
Correct Answer:
A
Q35: Topic #1
A company has terabytes of data in a database that the company can use for business analysis. The company wants to build an AI-based application that can build a SQL query from input text that employees provide. The employees have minimal experience with technology.Which solution meets these requirements?
A
Generative pre-trained transformers (GPT)
B
Residual neural network
C
Support vector machine
D
WaveNet
Correct Answer:
A
Q36: Topic #1
A company built a deep learning model for object detection and deployed the model to production.Which AI process occurs when the model analyzes a new image to identify objects?
A
Training
B
Inference
C
Model deployment
D
Bias correction
Correct Answer:
B
Q37: Topic #1
An AI practitioner is building a model to generate images of humans in various professions. The AI practitioner discovered that the input data is biased and that specific attributes affect the image generation and create bias in the model.Which technique will solve the problem?
A
Data augmentation for imbalanced classes
B
Model monitoring for class distribution
C
Retrieval Augmented Generation (RAG)
D
Watermark detection for images
Correct Answer:
A
Q38: Topic #1
A company is implementing the Amazon Titan foundation model (FM) by using Amazon Bedrock. The company needs to supplement the model by using relevant data from the company's private data sources.Which solution will meet this requirement?
A
Use a different FM.
B
Choose a lower temperature value.
C
Create an Amazon Bedrock knowledge base.
D
Enable model invocation logging.
Correct Answer:
C
Q39: Topic #1
A medical company is customizing a foundation model (FM) for diagnostic purposes. The company needs the model to be transparent and explainable to meet regulatory requirements.Which solution will meet these requirements?
A
Configure the security and compliance by using Amazon Inspector.
B
Generate simple metrics, reports, and examples by using Amazon SageMaker Clarify.
C
Encrypt and secure training data by using Amazon Macie.
D
Gather more data. Use Amazon Rekognition to add custom labels to the data.
Correct Answer:
B
Q40: Topic #1
A company wants to deploy a conversational chatbot to answer customer questions. The chatbot is based on a fine-tuned Amazon SageMaker JumpStart model. The application must comply with multiple regulatory frameworks.Which capabilities can the company show compliance for? (Choose two.)
A
Auto scaling inference endpoints
B
Threat detection
C
Data protection
D
Cost optimization
E
Loosely coupled microservices
Correct Answer:
B
C
Q41: Topic #1
A company is training a foundation model (FM). The company wants to increase the accuracy of the model up to a specific acceptance level.Which solution will meet these requirements?
A
Decrease the batch size.
B
Increase the epochs.
C
Decrease the epochs.
D
Increase the temperature parameter.
Correct Answer:
B
Q42: Topic #1
A company is building a large language model (LLM) question answering chatbot. The company wants to decrease the number of actions call center employees need to take to respond to customer questions.Which business objective should the company use to evaluate the effect of the LLM chatbot?
A
Website engagement rate
B
Average call duration
C
Corporate social responsibility
D
Regulatory compliance
Correct Answer:
B
Q43: Topic #1
Which functionality does Amazon SageMaker Clarify provide?
A
Integrates a Retrieval Augmented Generation (RAG) workflow
B
Monitors the quality of ML models in production
C
Documents critical details about ML models
D
Identifies potential bias during data preparation
Correct Answer:
D
Q44: Topic #1
A company is developing a new model to predict the prices of specific items. The model performed well on the training dataset. When the company deployed the model to production, the model's performance decreased significantly.What should the company do to mitigate this problem?
A
Reduce the volume of data that is used in training.
B
Add hyperparameters to the model.
C
Increase the volume of data that is used in training.
D
Increase the model training time.
Correct Answer:
C
Q45: Topic #1
An ecommerce company wants to build a solution to determine customer sentiments based on written customer reviews of products.Which AWS services meet these requirements? (Choose two.)
A
Amazon Lex
B
Amazon Comprehend
C
Amazon Polly
D
Amazon Bedrock
E
Amazon Rekognition
Correct Answer:
B
D
Q46: Topic #1
A company wants to use large language models (LLMs) with Amazon Bedrock to develop a chat interface for the company's product manuals. The manuals are stored as PDF files. Which solution meets these requirements MOST cost-effectively?
A
Use prompt engineering to add one PDF file as context to the user prompt when the prompt is submitted to Amazon Bedrock.
B
Use prompt engineering to add all the PDF files as context to the user prompt when the prompt is submitted to Amazon Bedrock.
C
Use all the PDF documents to fine-tune a model with Amazon Bedrock. Use the fine-tuned model to process user prompts.
D
Upload PDF documents to an Amazon Bedrock knowledge base. Use the knowledge base to provide context when users submit prompts to Amazon Bedrock.
Correct Answer:
D
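Illustration for Q46: once the PDF manuals are ingested into an Amazon Bedrock knowledge base, the application can call the RetrieveAndGenerate API so retrieved passages become context for each prompt. A sketch only; the knowledge base ID and model ARN are placeholders, and the request shape should be checked against current documentation:

```python
import boto3

client = boto3.client("bedrock-agent-runtime")

response = client.retrieve_and_generate(
    input={"text": "How do I reset the device to factory settings?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "KB1234567890",  # placeholder
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-haiku-20240307-v1:0",
        },
    },
)

print(response["output"]["text"])  # answer grounded in the product manuals
```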
Q47: Topic #1
A social media company wants to use a large language model (LLM) for content moderation. The company wants to evaluate the LLM outputs for bias and potential discrimination against specific groups or individuals.Which data source should the company use to evaluate the LLM outputs with the LEAST administrative effort?
A
User-generated content
B
Moderation logs
C
Content moderation guidelines
D
Benchmark datasets
Correct Answer:
D
Q48: Topic #1
A company wants to use a pre-trained generative AI model to generate content for its marketing campaigns. The company needs to ensure that the generated content aligns with the company's brand voice and messaging requirements.Which solution meets these requirements?
A
Optimize the model's architecture and hyperparameters to improve the model's overall performance.
B
Increase the model's complexity by adding more layers to the model's architecture.
C
Create effective prompts that provide clear instructions and context to guide the model's generation.
D
Select a large, diverse dataset to pre-train a new generative model.
Correct Answer:
C
Q49: Topic #1
A loan company is building a generative AI-based solution to offer new applicants discounts based on specific business criteria. The company wants to build and use an AI model responsibly to minimize bias that could negatively affect some customers.Which actions should the company take to meet these requirements? (Choose two.)
A
Detect imbalances or disparities in the data.
B
Ensure that the model runs frequently.
C
Evaluate the model's behavior so that the company can provide transparency to stakeholders.
D
Use the Recall-Oriented Understudy for Gisting Evaluation (ROUGE) technique to ensure that the model is 100% accurate.
E
Ensure that the model's inference time is within the accepted limits.
Correct Answer:
A
C
Q50: Topic #1
A company is using an Amazon Bedrock base model to summarize documents for an internal use case. The company trained a custom model to improve the summarization quality.Which action must the company take to use the custom model through Amazon Bedrock?
A
Purchase Provisioned Throughput for the custom model.
B
Deploy the custom model in an Amazon SageMaker endpoint for real-time inference.
C
Register the model with the Amazon SageMaker Model Registry.
D
Grant access to the custom model in Amazon Bedrock.
Correct Answer:
A
Q51: Topic #1
A company needs to choose a model from Amazon Bedrock to use internally. The company must identify a model that generates responses in a style that the company's employees prefer.What should the company do to meet these requirements?
A
Evaluate the models by using built-in prompt datasets.
B
Evaluate the models by using a human workforce and custom prompt datasets.
C
Use public model leaderboards to identify the model.
D
Use the model InvocationLatency runtime metrics in Amazon CloudWatch when trying models.
Correct Answer:
B
Q52: Topic #1
A student at a university is copying content from generative AI to write essays.Which challenge of responsible generative AI does this scenario represent?
A
Toxicity
B
Hallucinations
C
Plagiarism
D
Privacy
Correct Answer:
C
Q53: Topic #1
A company needs to build its own large language model (LLM) based on only the company's private data. The company is concerned about the environmental effect of the training process.Which Amazon EC2 instance type has the LEAST environmental effect when training LLMs?
A
Amazon EC2 C series
B
Amazon EC2 G series
C
Amazon EC2 P series
D
Amazon EC2 Trn series
Correct Answer:
D
Q54: Topic #1
A company wants to build an interactive application for children that generates new stories based on classic stories. The company wants to use Amazon Bedrock and needs to ensure that the results and topics are appropriate for children.Which AWS service or feature will meet these requirements?
A
Amazon Rekognition
B
Amazon Bedrock playgrounds
C
Guardrails for Amazon Bedrock
D
Agents for Amazon Bedrock
Correct Answer:
C
Q55: Topic #1
A company is building an application that needs to generate synthetic data that is based on existing data.Which type of model can the company use to meet this requirement?
A
Generative adversarial network (GAN)
B
XGBoost
C
Residual neural network
D
WaveNet
Correct Answer:
A
Q56: Topic #1
A digital devices company wants to predict customer demand for memory hardware. The company does not have coding experience or knowledge of ML algorithms and needs to develop a data-driven predictive model. The company needs to perform analysis on internal data and external data.Which solution will meet these requirements?
A
Store the data in Amazon S3. Create ML models and demand forecast predictions by using Amazon SageMaker built-in algorithms that use the data from Amazon S3.
B
Import the data into Amazon SageMaker Data Wrangler. Create ML models and demand forecast predictions by using SageMaker built-in algorithms.
C
Import the data into Amazon SageMaker Data Wrangler. Build ML models and demand forecast predictions by using an Amazon Personalize Trending-Now recipe.
D
Import the data into Amazon SageMaker Canvas. Build ML models and demand forecast predictions by selecting the values in the data from SageMaker Canvas.
Correct Answer:
D
Q57: Topic #1
A company has installed a security camera. The company uses an ML model to evaluate the security camera footage for potential thefts. The company has discovered that the model disproportionately flags people who are members of a specific ethnic group.Which type of bias is affecting the model output?
A
Measurement bias
B
Sampling bias
C
Observer bias
D
Confirmation bias
Correct Answer:
B
Q58: Topic #1
A company is building a customer service chatbot. The company wants the chatbot to improve its responses by learning from past interactions and online resources.Which AI learning strategy provides this self-improvement capability?
A
Supervised learning with a manually curated dataset of good responses and bad responses
B
Reinforcement learning with rewards for positive customer feedback
C
Unsupervised learning to find clusters of similar customer inquiries
D
Supervised learning with a continuously updated FAQ database
Correct Answer:
B
Q59: Topic #1
An AI practitioner has built a deep learning model to classify the types of materials in images. The AI practitioner now wants to measure the model performance. Which metric will help the AI practitioner evaluate the performance of the model?
A
Confusion matrix
B
Correlation matrix
C
R2 score
D
Mean squared error (MSE)
Correct Answer:
A
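Illustration for Q59: a confusion matrix shows correct and incorrect predictions for each material class. A minimal scikit-learn sketch with made-up labels:

```python
from sklearn.metrics import confusion_matrix

# Made-up material labels for six images
y_true = ["wood", "metal", "glass", "metal", "wood", "glass"]
y_pred = ["wood", "metal", "metal", "metal", "glass", "glass"]

labels = ["glass", "metal", "wood"]
# Rows are actual classes, columns are predicted classes
print(confusion_matrix(y_true, y_pred, labels=labels))
```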
Q60: Topic #1
A company has built a chatbot that can respond to natural language questions with images. The company wants to ensure that the chatbot does not return inappropriate or unwanted images.Which solution will meet these requirements?
A
Implement moderation APIs.
B
Retrain the model with a general public dataset.
C
Perform model validation.
D
Automate user feedback integration.
Correct Answer:
A
Q61: Topic #1
An AI practitioner is using an Amazon Bedrock base model to summarize session chats from the customer service department. The AI practitioner wants to store invocation logs to monitor model input and output data. Which strategy should the AI practitioner use?
A
Configure AWS CloudTrail as the logs destination for the model.
B
Enable invocation logging in Amazon Bedrock.
C
Configure AWS Audit Manager as the logs destination for the model.
D
Configure model invocation logging in Amazon EventBridge.
Correct Answer:
B
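Illustration for Q61: invocation logging can be turned on through the Amazon Bedrock console or its control-plane API. The sketch below is an assumption-laden outline of the API call; the exact field names and the destination bucket should be verified against current documentation:

```python
import boto3

bedrock = boto3.client("bedrock")

# Field names reflect my reading of PutModelInvocationLoggingConfiguration; verify before use.
bedrock.put_model_invocation_logging_configuration(
    loggingConfig={
        "s3Config": {
            "bucketName": "example-bedrock-invocation-logs",  # placeholder bucket
            "keyPrefix": "chat-summarization/",
        },
        "textDataDeliveryEnabled": True,   # capture prompts and completions
        "imageDataDeliveryEnabled": False,
        "embeddingDataDeliveryEnabled": False,
    }
)
```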
Q62: Topic #1
A company is building an ML model to analyze archived data. The company must perform inference on large datasets that are multiple GBs in size. The company does not need to access the model predictions immediately. Which Amazon SageMaker inference option will meet these requirements?
A
Batch transform
B
Real-time inference
C
Serverless inference
D
Asynchronous inference
Correct Answer:
A
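Illustration for Q62: a batch transform job scores a large archived dataset offline and writes the predictions back to Amazon S3. Names, paths, and the instance type below are placeholders:

```python
import boto3

sagemaker = boto3.client("sagemaker")

sagemaker.create_transform_job(
    TransformJobName="archived-data-scoring-001",
    ModelName="my-trained-model",  # an existing SageMaker model
    TransformInput={
        "DataSource": {
            "S3DataSource": {"S3DataType": "S3Prefix", "S3Uri": "s3://example-bucket/archive/"}
        },
        "ContentType": "text/csv",
        "SplitType": "Line",
    },
    TransformOutput={"S3OutputPath": "s3://example-bucket/predictions/"},
    TransformResources={"InstanceType": "ml.m5.xlarge", "InstanceCount": 1},
)
```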
Q63: Topic #1
Which term describes the numerical representations of real-world objects and concepts that AI and natural language processing (NLP) models use to improve understanding of textual information?
A
Embeddings
B
Tokens
C
Models
D
Binaries
Correct Answer:
A
Q64: Topic #1
A research company implemented a chatbot by using a foundation model (FM) from Amazon Bedrock. The chatbot searches for answers to questions from a large database of research papers.After multiple prompt engineering attempts, the company notices that the FM is performing poorly because of the complex scientific terms in the research papers.How can the company improve the performance of the chatbot?
A
Use few-shot prompting to define how the FM can answer the questions.
B
Use domain adaptation fine-tuning to adapt the FM to complex scientific terms.
C
Change the FM inference parameters.
D
Clean the research paper data to remove complex scientific terms.
Correct Answer:
B
Q65: Topic #1
A company wants to use a large language model (LLM) on Amazon Bedrock for sentiment analysis. The company needs the LLM to produce more consistent responses to the same input prompt. Which adjustment to an inference parameter should the company make to meet these requirements?
A
Decrease the temperature value.
B
Increase the temperature value.
C
Decrease the length of output tokens.
D
Increase the maximum generation length.
Correct Answer:
A
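Illustration for Q65: lowering the temperature reduces sampling randomness, so the same prompt yields more consistent output. A sketch using the Bedrock Converse API; the model ID is a placeholder and the request shape should be verified:

```python
import boto3

client = boto3.client("bedrock-runtime")

response = client.converse(
    modelId="amazon.titan-text-express-v1",  # placeholder model ID
    messages=[{"role": "user", "content": [{"text": "Classify the sentiment: 'Great battery life.'"}]}],
    inferenceConfig={"temperature": 0.0, "maxTokens": 50},  # low temperature for consistency
)

print(response["output"]["message"]["content"][0]["text"])
```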
Q66: Topic #1
A company wants to develop a large language model (LLM) application by using Amazon Bedrock and customer data that is uploaded to Amazon S3. The company's security policy states that each team can access data for only the team's own customers. Which solution will meet these requirements?
A
Create an Amazon Bedrock custom service role for each team that has access to only the team's customer data.
B
Create a custom service role that has Amazon S3 access. Ask teams to specify the customer name on each Amazon Bedrock request.
C
Redact personal data in Amazon S3. Update the S3 bucket policy to allow team access to customer data.
D
Create one Amazon Bedrock role that has full Amazon S3 access. Create IAM roles for each team that have access to only each team's customer folders.
Correct Answer:
A
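Illustration for Q66: each team's Amazon Bedrock custom service role can be limited to that team's own customer prefix in Amazon S3. A hypothetical policy for one team (bucket and prefix are placeholders):

```python
import json

team_a_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example-customer-data/team-a/*",  # Team A's customers only
        }
    ],
}

print(json.dumps(team_a_policy, indent=2))  # attach to Team A's Bedrock service role
```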
Q67: Topic #1
A medical company deployed a disease detection model on Amazon Bedrock. To comply with privacy policies, the company wants to prevent the model from including personal patient information in its responses. The company also wants to receive notification when policy violations occur. Which solution meets these requirements?
A
Use Amazon Macie to scan the model's output for sensitive data and set up alerts for potential violations.
B
Configure AWS CloudTrail to monitor the model's responses and create alerts for any detected personal information.
C
Use Guardrails for Amazon Bedrock to filter content. Set up Amazon CloudWatch alarms for notification of policy violations.
D
Implement Amazon SageMaker Model Monitor to detect data drift and receive alerts when model quality degrades.
Correct Answer:
C
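Illustration for Q67: a guardrail configured to block personal information can be attached to each model invocation, and alarms on the guardrail's monitoring metrics provide the notification. The guardrail ID, version, and model ID below are placeholders, and the request fields are my recollection of the InvokeModel API:

```python
import json
import boto3

runtime = boto3.client("bedrock-runtime")

response = runtime.invoke_model(
    modelId="amazon.titan-text-express-v1",   # placeholder model ID
    guardrailIdentifier="gr-1234567890",       # placeholder guardrail configured to block PII
    guardrailVersion="1",
    body=json.dumps({"inputText": "Summarize the latest diagnostic findings."}),
)

print(json.loads(response["body"].read()))
# An Amazon CloudWatch alarm on the guardrail metrics can then notify the team of violations.
```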
Q68: Topic #1
A company manually reviews all submitted resumes in PDF format. As the company grows, the company expects the volume of resumes to exceed the company's review capacity. The company needs an automated system to convert the PDF resumes into plain text format for additional processing.Which AWS service meets this requirement?
A
Amazon Textract
B
Amazon Personalize
C
Amazon Lex
D
Amazon Transcribe
Correct Answer:
A
Q69: Topic #1
An education provider is building a question and answer application that uses a generative AI model to explain complex concepts. The education provider wants to automatically change the style of the model response depending on who is asking the question. The education provider will give the model the age range of the user who has asked the question.Which solution meets these requirements with the LEAST implementation effort?
A
Fine-tune the model by using additional training data that is representative of the various age ranges that the application will support.
B
Add a role description to the prompt context that instructs the model of the age range that the response should target.
C
Use chain-of-thought reasoning to deduce the correct style and complexity for a response suitable for that user.
D
Summarize the response text depending on the age of the user so that younger users receive shorter responses.
Correct Answer:
B
Q70: Topic #1
Which strategy evaluates the accuracy of a foundation model (FM) that is used in image classification tasks?
A
Calculate the total cost of resources used by the model.
B
Measure the model's accuracy against a predefined benchmark dataset.
C
Count the number of layers in the neural network.
D
Assess the color accuracy of images processed by the model.
Correct Answer:
B
Q71: Topic #1
An accounting firm wants to implement a large language model (LLM) to automate document processing. The firm must proceed responsibly to avoid potential harms.What should the firm do when developing and deploying the LLM? (Choose two.)
A
Include fairness metrics for model evaluation.
B
Adjust the temperature parameter of the model.
C
Modify the training data to mitigate bias.
D
Avoid overfitting on the training data.
E
Apply prompt engineering techniques.
Correct Answer:
A
C
Q72: Topic #1
A company is building an ML model. The company collected new data and analyzed the data by creating a correlation matrix, calculating statistics, and visualizing the data.Which stage of the ML pipeline is the company currently in?
A
Data pre-processing
B
Feature engineering
C
Exploratory data analysis
D
Hyperparameter tuning
Correct Answer:
C
Q73: Topic #1
A company has documents that are missing some words because of a database error. The company wants to build an ML model that can suggest potential words to fill in the missing text.Which type of model meets this requirement?
A
Topic modeling
B
Clustering models
C
Prescriptive ML models
D
BERT-based models
Correct Answer:
D
Q74: Topic #1
A company wants to display the total sales for its top-selling products across various retail locations in the past 12 months.Which AWS solution should the company use to automate the generation of graphs?
A
Amazon Q in Amazon EC2
B
Amazon Q Developer
C
Amazon Q in Amazon QuickSight
D
Amazon Q in AWS Chatbot
Correct Answer:
C
Q75: Topic #1
A company is building a chatbot to improve user experience. The company is using a large language model (LLM) from Amazon Bedrock for intent detection. The company wants to use few-shot learning to improve intent detection accuracy.Which additional data does the company need to meet these requirements?
A
Pairs of chatbot responses and correct user intents
B
Pairs of user messages and correct chatbot responses
C
Pairs of user messages and correct user intents
D
Pairs of user intents and correct chatbot responses
Correct Answer:
C
Q76: Topic #1
A company is using few-shot prompting on a base model that is hosted on Amazon Bedrock. The model currently uses 10 examples in the prompt. The model is invoked once daily and is performing well. The company wants to lower the monthly cost.Which solution will meet these requirements?
A
Customize the model by using fine-tuning.
B
Decrease the number of tokens in the prompt.
C
Increase the number of tokens in the prompt.
D
Use Provisioned Throughput.
Correct Answer:
B
Q77: Topic #1
An AI practitioner is using a large language model (LLM) to create content for marketing campaigns. The generated content sounds plausible and factual but is incorrect.Which problem is the LLM having?
A
Data leakage
B
Hallucination
C
Overfitting
D
Underfitting
Correct Answer:
B
Q78: Topic #1
An AI practitioner trained a custom model on Amazon Bedrock by using a training dataset that contains confidential data. The AI practitioner wants to ensure that the custom model does not generate inference responses based on confidential data.How should the AI practitioner prevent responses based on confidential data?
A
Delete the custom model. Remove the confidential data from the training dataset. Retrain the custom model.
B
Mask the confidential data in the inference responses by using dynamic data masking.
C
Encrypt the confidential data in the inference responses by using Amazon SageMaker.
D
Encrypt the confidential data in the custom model by using AWS Key Management Service (AWS KMS).
Correct Answer:
A
Q79: Topic #1
A company has built a solution by using generative AI. The solution uses large language models (LLMs) to translate training manuals from English into other languages. The company wants to evaluate the accuracy of the solution by examining the text generated for the manuals. Which model evaluation strategy meets these requirements?
A
Bilingual Evaluation Understudy (BLEU)
B
Root mean squared error (RMSE)
C
Recall-Oriented Understudy for Gisting Evaluation (ROUGE)
D
F1 score
Correct Answer:
A
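Illustration for Q79: BLEU compares a generated translation against human reference translations by n-gram overlap. A toy NLTK sketch with made-up sentences:

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = [["insert", "the", "battery", "before", "charging", "the", "device"]]
candidate = ["insert", "the", "battery", "before", "you", "charge", "the", "device"]

# Smoothing avoids zero scores for short sentences that miss higher-order n-grams
score = sentence_bleu(reference, candidate, smoothing_function=SmoothingFunction().method1)
print(score)  # closer to 1.0 means closer to the reference translation
```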
Q80: Topic #1
A large retailer receives thousands of customer support inquiries about products every day. The customer support inquiries need to be processed and responded to quickly. The company wants to implement Agents for Amazon Bedrock.What are the key benefits of using Amazon Bedrock agents that could help this retailer?
A
Generation of custom foundation models (FMs) to predict customer needs
B
Automation of repetitive tasks and orchestration of complex workflows
C
Automatically calling multiple foundation models (FMs) and consolidating the results
D
Selecting the foundation model (FM) based on predefined criteria and metrics
Correct Answer:
B
Q81: Topic #1
Which option is a benefit of ongoing pre-training when fine-tuning a foundation model (FM)?
A
Helps decrease the model's complexity
B
Improves model performance over time
C
Decreases the training time requirement
D
Optimizes model inference time
Correct Answer:
B
Q82: Topic #1
What are tokens in the context of generative AI models?
A
Tokens are the basic units of input and output that a generative AI model operates on, representing words, subwords, or other linguistic units.
B
Tokens are the mathematical representations of words or concepts used in generative AI models.
C
Tokens are the pre-trained weights of a generative AI model that are fine-tuned for specific tasks.
D
Tokens are the specific prompts or instructions given to a generative AI model to generate output.
Correct Answer:
A
Q83: Topic #1
A company wants to assess the costs that are associated with using a large language model (LLM) to generate inferences. The company wants to use Amazon Bedrock to build generative AI applications.Which factor will drive the inference costs?
A
Number of tokens consumed
B
Temperature value
C
Amount of data used to train the LLM
D
Total training time
Correct Answer:
A
Q84: Topic #1
A company is using Amazon SageMaker Studio notebooks to build and train ML models. The company stores the data in an Amazon S3 bucket. The company needs to manage the flow of data from Amazon S3 to SageMaker Studio notebooks.Which solution will meet this requirement?
A
Use Amazon Inspector to monitor SageMaker Studio.
B
Use Amazon Macie to monitor SageMaker Studio.
C
Configure SageMaker to use a VPC with an S3 endpoint.
D
Configure SageMaker to use S3 Glacier Deep Archive.
Correct Answer:
C
Q85: Topic #1
A company has a foundation model (FM) that was customized by using Amazon Bedrock to answer customer queries about products. The company wants to validate the model's responses to new types of queries. The company needs to upload a new dataset that Amazon Bedrock can use for validation.Which AWS service meets these requirements?
A
Amazon S3
B
Amazon Elastic Block Store (Amazon EBS)
C
Amazon Elastic File System (Amazon EFS)
D
AWS Snowcone
Correct Answer:
A
Q86: Topic #1
Which prompting attack directly exposes the configured behavior of a large language model (LLM)?
A
Prompted persona switches
B
Exploiting friendliness and trust
C
Ignoring the prompt template
D
Extracting the prompt template
Correct Answer:
D
Q87: Topic #1
A company wants to use Amazon Bedrock. The company needs to review which security aspects the company is responsible for when using Amazon Bedrock.Which security aspect will the company be responsible for?
A
Patching and updating the versions of Amazon Bedrock
B
Protecting the infrastructure that hosts Amazon Bedrock
C
Securing the company's data in transit and at rest
D
Provisioning Amazon Bedrock within the company network
Correct Answer:
C
Q88: Topic #1
A social media company wants to use a large language model (LLM) to summarize messages. The company has chosen a few LLMs that are available on Amazon SageMaker JumpStart. The company wants to compare the generated output toxicity of these models.Which strategy gives the company the ability to evaluate the LLMs with the LEAST operational overhead?
A
Crowd-sourced evaluation
B
Automatic model evaluation
C
Model evaluation with human workers
D
Reinforcement learning from human feedback (RLHF)
Correct Answer:
B
Q89: Topic #1
A company is testing the security of a foundation model (FM). During testing, the company wants to get around the safety features and make harmful content.Which security technique is this an example of?
A
Fuzzing training data to find vulnerabilities
B
Denial of service (DoS)
C
Penetration testing with authorization
D
Jailbreak
Correct Answer:
D
Q90: Topic #1
A company needs to use Amazon SageMaker for model training and inference. The company must comply with regulatory requirements to run SageMaker jobs in an isolated environment without internet access.Which solution will meet these requirements?
A
Run SageMaker training and inference by using SageMaker Experiments.
B
Run SageMaker training and inference by using network isolation.
C
Encrypt the data at rest by using encryption for SageMaker geospatial capabilities.
D
Associate appropriate AWS Identity and Access Management (IAM) roles with the SageMaker jobs.
Correct Answer:
B
Q91: Topic #1
An ML research team develops custom ML models. The model artifacts are shared with other teams for integration into products and services. The ML team retains the model training code and data. The ML team wants to build a mechanism that the ML team can use to audit models.Which solution should the ML team use when publishing the custom ML models?
A
Create documents with the relevant information. Store the documents in Amazon S3.
B
Use AWS AI Service Cards for transparency and understanding models.
C
Create Amazon SageMaker Model Cards with intended uses and training and inference details.
D
Create model training scripts. Commit the model training scripts to a Git repository.
Correct Answer:
C
Q92: Topic #1
A software company builds tools for customers. The company wants to use AI to increase software development productivity.Which solution will meet these requirements?
A
Use a binary classification model to generate code reviews.
B
Install code recommendation software in the company's developer tools.
C
Install a code forecasting tool to predict potential code issues.
D
Use a natural language processing (NLP) tool to generate code.
Correct Answer:
B
Q93: Topic #1
A retail store wants to predict the demand for a specific product for the next few weeks by using the Amazon SageMaker DeepAR forecasting algorithm.Which type of data will meet this requirement?
A
Text data
B
Image data
C
Time series data
D
Binary data
Correct Answer:
C
Q94: Topic #1
A large retail bank wants to develop an ML system to help the risk management team decide on loan allocations for different demographics.What must the bank do to develop an unbiased ML model?
A
Reduce the size of the training dataset.
B
Ensure that the ML model predictions are consistent with historical results.
C
Create a different ML model for each demographic group.
D
Measure class imbalance on the training dataset. Adapt the training process accordingly.
Correct Answer:
D
Q95: Topic #1
Which prompting technique can protect against prompt injection attacks?
A
Adversarial prompting
B
Zero-shot prompting
C
Least-to-most prompting
D
Chain-of-thought prompting
Correct Answer:
A
Q96: Topic #1
A company has fine-tuned a large language model (LLM) to answer questions for a help desk. The company wants to determine if the fine-tuning has enhanced the model's accuracy.Which metric should the company use for the evaluation?
A
Precision
B
Time to first token
C
F1 score
D
Word error rate
Correct Answer:
C
Q97: Topic #1
A company is using Retrieval Augmented Generation (RAG) with Amazon Bedrock and Stable Diffusion to generate product images based on text descriptions. The results are often random and lack specific details. The company wants to increase the specificity of the generated images.Which solution meets these requirements?
A
Increase the number of generation steps.
B
Use the MASK_IMAGE_BLACK mask source option.
C
Increase the classifier-free guidance (CFG) scale.
D
Increase the prompt strength.
Correct Answer:
C
Q98: Topic #1
A company wants to implement a large language model (LLM) based chatbot to provide customer service agents with real-time contextual responses to customers' inquiries. The company will use the company's policies as the knowledge base.Which solution will meet these requirements MOST cost-effectively?
A
Retrain the LLM on the company policy data.
B
Fine-tune the LLM on the company policy data.
C
Implement Retrieval Augmented Generation (RAG) for in-context responses.
D
Use pre-training and data augmentation on the company policy data.
Correct Answer:
C
Q99: Topic #1
A company wants to create a new solution by using AWS Glue. The company has minimal programming experience with AWS Glue.Which AWS service can help the company use AWS Glue?
A
Amazon Q Developer
B
AWS Config
C
Amazon Personalize
D
Amazon Comprehend
Correct Answer:
A
Q100: Topic #1
A company is developing a mobile ML app that uses a phone's camera to diagnose and treat insect bites. The company wants to train an image classification model by using a diverse dataset of insect bite photos from different genders, ethnicities, and geographic locations around the world.Which principle of responsible AI does the company demonstrate in this scenario?
A
Fairness
B
Explainability
C
Governance
D
Transparency
Correct Answer:
A
Q101: Topic #1
A company is developing an ML model to make loan approvals. The company must implement a solution to detect bias in the model. The company must also be able to explain the model's predictions.Which solution will meet these requirements?
A
Amazon SageMaker Clarify
B
Amazon SageMaker Data Wrangler
C
Amazon SageMaker Model Cards
D
AWS AI Service Cards
Correct Answer:
A
Q102: Topic #1
A company has developed a generative text summarization model by using Amazon Bedrock. The company will use Amazon Bedrock automatic model evaluation capabilities.Which metric should the company use to evaluate the accuracy of the model?
A
Area Under the ROC Curve (AUC) score
B
F1 score
C
BERTScore
D
Real world knowledge (RWK) score
Correct Answer:
C
Q103: Topic #1
An AI practitioner wants to predict the classification of flowers based on petal length, petal width, sepal length, and sepal width. Which algorithm meets these requirements?
A
K-nearest neighbors (k-NN)
B
K-means
C
Autoregressive Integrated Moving Average (ARIMA)
D
Linear regression
Correct Answer:
A
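Illustration for Q103: k-NN is a natural fit here; the classic iris dataset uses exactly these four measurements:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)  # sepal length/width, petal length/width
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)
print(knn.score(X_test, y_test))  # classification accuracy on held-out flowers
```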
Q104: Topic #1
A company is using custom models in Amazon Bedrock for a generative AI application. The company wants to use a company managed encryption key to encrypt the model artifacts that the model customization jobs create.Which AWS service meets these requirements?
A
AWS Key Management Service (AWS KMS)
B
Amazon Inspector
C
Amazon Macie
D
AWS Secrets Manager
Correct Answer:
A
Q105: Topic #1
A company wants to use large language models (LLMs) to produce code from natural language code comments.Which LLM feature meets these requirements?
A
Text summarization
B
Text generation
C
Text completion
D
Text classification
Correct Answer:
B
Q106: Topic #1
A company is introducing a mobile app that helps users learn foreign languages. The app makes text more coherent by calling a large language model (LLM). The company collected a diverse dataset of text and supplemented the dataset with examples of more readable versions. The company wants the LLM output to resemble the provided examples.Which metric should the company use to assess whether the LLM meets these requirements?
A
Value of the loss function
B
Semantic robustness
C
Recall-Oriented Understudy for Gisting Evaluation (ROUGE) score
D
Latency of the text generation
Correct Answer:
C
Q107: Topic #1
A company notices that its foundation model (FM) generates images that are unrelated to the prompts. The company wants to modify the prompt techniques to decrease unrelated images.Which solution meets these requirements?
A
Use zero-shot prompts.
B
Use negative prompts.
C
Use positive prompts.
D
Use ambiguous prompts.
Correct Answer:
B
Q108: Topic #1
A company wants to use a large language model (LLM) to generate concise, feature-specific descriptions for the company’s products.Which prompt engineering technique meets these requirements?
A
Create one prompt that covers all products. Edit the responses to make the responses more specific, concise, and tailored to each product.
B
Create prompts for each product category that highlight the key features. Include the desired output format and length for each prompt response.
C
Include a diverse range of product features in each prompt to generate creative and unique descriptions.
D
Provide detailed, product-specific prompts to ensure precise and customized descriptions.
Correct Answer:
B
Q109: Topic #1
A company is developing an ML model to predict customer churn. The model performs well on the training dataset but does not accurately predict churn for new data. Which solution will resolve this issue?
A
Decrease the regularization parameter to increase model complexity.
B
Increase the regularization parameter to decrease model complexity.
C
Add more features to the input data.
D
Train the model for more epochs.
Correct Answer:
B
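Illustration for Q109: the gap between training and production performance suggests overfitting, so increasing the regularization parameter reduces model complexity. In scikit-learn's logistic regression, stronger regularization means a smaller C (the inverse regularization strength):

```python
from sklearn.linear_model import LogisticRegression

# Smaller C = stronger regularization = simpler model that generalizes better
weak_regularization = LogisticRegression(C=10.0, max_iter=1000)
strong_regularization = LogisticRegression(C=0.1, max_iter=1000)
# Fit both on the churn training data and compare validation scores to choose between them.
```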
Q110: Topic #1
A company is implementing intelligent agents to provide conversational search experiences for its customers. The company needs a database service that will support storage and queries of embeddings from a generative AI model as vectors in the database.Which AWS service will meet these requirements?
A
Amazon Athena
B
Amazon Aurora PostgreSQL
C
Amazon Redshift
D
Amazon EMR
Correct Answer:
B
Q111: Topic #1
A financial institution is building an AI solution to make loan approval decisions by using a foundation model (FM). For security and audit purposes, the company needs the AI solution's decisions to be explainable.Which factor relates to the explainability of the AI solution's decisions?
A
Model complexity
B
Training time
C
Number of hyperparameters
D
Deployment time
Correct Answer:
A
Q112: Topic #1
A pharmaceutical company wants to analyze user reviews of new medications and provide a concise overview for each medication.Which solution meets these requirements?
A
Create a time-series forecasting model to analyze the medication reviews by using Amazon Personalize.
B
Create medication review summaries by using Amazon Bedrock large language models (LLMs).
C
Create a classification model that categorizes medications into different groups by using Amazon SageMaker.
D
Create medication review summaries by using Amazon Rekognition.
Correct Answer:
B
Q113: Topic #1
A company wants to build a lead prioritization application for its employees to contact potential customers. The application must give employees the ability to view and adjust the weights assigned to different variables in the model based on domain knowledge and expertise.Which ML model type meets these requirements?
A
Logistic regression model
B
Deep learning model built on principal components
C
K-nearest neighbors (k-NN) model
D
Neural network
Correct Answer:
A
Q114: Topic #1
Which strategy will determine if a foundation model (FM) effectively meets business objectives?
A
Evaluate the model's performance on benchmark datasets.
B
Analyze the model's architecture and hyperparameters.
C
Assess the model's alignment with specific use cases.
D
Measure the computational resources required for model deployment.
Correct Answer:
C
Q115: Topic #1
A company needs to train an ML model to classify images of different types of animals. The company has a large dataset of labeled images and will not label more data.Which type of learning should the company use to train the model?
A
Supervised learning
B
Unsupervised learning
C
Reinforcement learning
D
Active learning
Correct Answer:
A
|
| question_th |
Q116:
Chapter: - Topic #1
Which phase of the ML lifecycle determines compliance and regulatory requirements?
A
Feature engineering
B
Model training
C
Data collection
D
Business goal identification
Correct Answer:
D
|
| question_th |
Q117:
Chapter: - Topic #1
A food service company wants to develop an ML model to help decrease daily food waste and increase sales revenue. The company needs to continuously improve the model's accuracy.Which solution meets these requirements?
A
Use Amazon SageMaker and iterate with newer data.
B
Use Amazon Personalize and iterate with historical data.
C
Use Amazon CloudWatch to analyze customer orders.
D
Use Amazon Rekognition to optimize the model.
Correct Answer:
A
|
| question_th |
Q118:
Chapter: - Topic #1
A company has developed an ML model to predict real estate sale prices. The company wants to deploy the model to make predictions without managing servers or infrastructure.Which solution meets these requirements?
A
Deploy the model on an Amazon EC2 instance.
B
Deploy the model on an Amazon Elastic Kubernetes Service (Amazon EKS) cluster.
C
Deploy the model by using Amazon CloudFront with an Amazon S3 integration.
D
Deploy the model by using an Amazon SageMaker endpoint.
Correct Answer:
D
|
| question_th |
Q119:
Chapter: - Topic #1
A company wants to develop an AI application to help its employees check open customer claims, identify details for a specific claim, and access documents for a claim.Which solution meets these requirements?
A
Use Agents for Amazon Bedrock with Amazon Fraud Detector to build the application.
B
Use Agents for Amazon Bedrock with Amazon Bedrock knowledge bases to build the application.
C
Use Amazon Personalize with Amazon Bedrock knowledge bases to build the application.
D
Use Amazon SageMaker to build the application by training a new ML model.
Correct Answer:
B
|
| question_th |
Q120:
Chapter: - Topic #1
A manufacturing company uses AI to inspect products and find any damages or defects.Which type of AI application is the company using?
A
Recommendation system
B
Natural language processing (NLP)
C
Computer vision
D
Image processing
Correct Answer:
C
|
| question_th |
Q121:
Chapter: - Topic #1
A company wants to create an ML model to predict customer satisfaction. The company needs fully automated model tuning.Which AWS service meets these requirements?
A
Amazon Personalize
B
Amazon SageMaker
C
Amazon Athena
D
Amazon Comprehend
Correct Answer:
B
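As a sketch of the "fully automated model tuning" that Q121 points to, the SageMaker Python SDK exposes automatic model tuning through HyperparameterTuner. The container image, IAM role, S3 paths, and ranges below are placeholder assumptions.

```python
# Sketch of SageMaker automatic model tuning (hyperparameter tuning jobs).
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.tuner import HyperparameterTuner, ContinuousParameter, IntegerParameter

session = sagemaker.Session()
xgb = Estimator(
    image_uri=sagemaker.image_uris.retrieve("xgboost", session.boto_region_name, version="1.7-1"),
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",  # placeholder role
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/churn-model/output",
)
xgb.set_hyperparameters(objective="binary:logistic", num_round=200)

tuner = HyperparameterTuner(
    estimator=xgb,
    objective_metric_name="validation:auc",
    hyperparameter_ranges={
        "eta": ContinuousParameter(0.01, 0.3),
        "max_depth": IntegerParameter(3, 10),
    },
    max_jobs=20,
    max_parallel_jobs=4,
)

# SageMaker launches, evaluates, and compares the training jobs automatically.
tuner.fit({"train": "s3://my-bucket/churn/train",
           "validation": "s3://my-bucket/churn/validation"})
```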
|
| question_th |
Q122:
Chapter: - Topic #1
Which technique can a company use to lower bias and toxicity in generative AI applications during the post-processing stage of the ML lifecycle?
A
Human-in-the-loop
B
Data augmentation
C
Feature engineering
D
Adversarial training
Correct Answer:
A
|
| question_th |
Q123:
Chapter: - Topic #1
A bank has fine-tuned a large language model (LLM) to expedite the loan approval process. During an external audit of the model, the company discovered that the model was approving loans at a faster pace for a specific demographic than for other demographics.How should the bank fix this issue MOST cost-effectively?
A
Include more diverse training data. Fine-tune the model again by using the new data.
B
Use Retrieval Augmented Generation (RAG) with the fine-tuned model.
C
Use AWS Trusted Advisor checks to eliminate bias.
D
Pre-train a new LLM with more diverse training data.
Correct Answer:
A
|
| question_th |
Q124:
Chapter: - Topic #1
A company needs to log all requests made to its Amazon Bedrock API. The company must retain the logs securely for 5 years at the lowest possible cost.Which combination of AWS service and storage class meets these requirements? (Choose two.)
A
AWS CloudTrail
B
Amazon CloudWatch
C
AWS Audit Manager
D
Amazon S3 Intelligent-Tiering
E
Amazon S3 Standard
Correct Answer:
A
D
|
| question_th |
Q125:
Chapter: - Topic #1
An ecommerce company wants to improve search engine recommendations by customizing the results for each user of the company’s ecommerce platform.Which AWS service meets these requirements?
A
Amazon Personalize
B
Amazon Kendra
C
Amazon Rekognition
D
Amazon Transcribe
Correct Answer:
A
|
| question_th |
Q126:
Chapter: - Topic #1
A hospital is developing an AI system to assist doctors in diagnosing diseases based on patient records and medical images. To comply with regulations, the sensitive patient data must not leave the country the data is located in.Which data governance strategy will ensure compliance and protect patient privacy?
A
Data residency
B
Data quality
C
Data discoverability
D
Data enrichment
Correct Answer:
A
|
| question_th |
Q127:
Chapter: - Topic #1
A company needs to monitor the performance of its ML systems by using a highly scalable AWS service.Which AWS service meets these requirements?
A
Amazon CloudWatch
B
AWS CloudTrail
C
AWS Trusted Advisor
D
AWS Config
Correct Answer:
A
|
| question_th |
Q128:
Chapter: - Topic #1
An AI practitioner is developing a prompt for an Amazon Titan model. The model is hosted on Amazon Bedrock. The AI practitioner is using the model to solve numerical reasoning challenges. The AI practitioner adds the following phrase to the end of the prompt: “Ask the model to show its work by explaining its reasoning step by step.”Which prompt engineering technique is the AI practitioner using?
A
Chain-of-thought prompting
B
Prompt injection
C
Few-shot prompting
D
Prompt templating
Correct Answer:
A
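To make the chain-of-thought idea in Q128 concrete, here is a minimal sketch using the Amazon Bedrock Runtime Converse API via boto3. The model ID, Region, and prompt wording are illustrative assumptions, not part of the question.

```python
# Sketch of chain-of-thought prompting against an Amazon Bedrock model.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

prompt = (
    "A warehouse ships 12 boxes per pallet and has 7 pallets plus 5 loose boxes. "
    "How many boxes are there in total? "
    "Show your work by explaining your reasoning step by step."  # chain-of-thought cue
)

response = bedrock.converse(
    modelId="amazon.titan-text-express-v1",
    messages=[{"role": "user", "content": [{"text": prompt}]}],
)
print(response["output"]["message"]["content"][0]["text"])
```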
|
| question_th |
Q129:
Chapter: - Topic #1
Which AWS service makes foundation models (FMs) available to help users build and scale generative AI applications?
A
Amazon Q Developer
B
Amazon Bedrock
C
Amazon Kendra
D
Amazon Comprehend
Correct Answer:
B
|
| question_th |
Q130:
Chapter: - Topic #1
A company is building a mobile app for users who have a visual impairment. The app must be able to hear what users say and provide voice responses.Which solution will meet these requirements?
A
Use a deep learning neural network to perform speech recognition.
B
Build ML models to search for patterns in numeric data.
C
Use generative AI summarization to generate human-like text.
D
Build custom models for image classification and recognition.
Correct Answer:
A
|
| question_th |
Q131:
Chapter: - Topic #1
A company wants to enhance response quality for a large language model (LLM) for complex problem-solving tasks. The tasks require detailed reasoning and a step-by-step explanation process.Which prompt engineering technique meets these requirements?
A
Few-shot prompting
B
Zero-shot prompting
C
Directional stimulus prompting
D
Chain-of-thought prompting
Correct Answer:
D
|
| question_th |
Q132:
Chapter: - Topic #1
A company wants to keep its foundation model (FM) relevant by using the most recent data. The company wants to implement a model training strategy that includes regular updates to the FM.Which solution meets these requirements?
A
Batch learning
B
Continuous pre-training
C
Static training
D
Latent training
Correct Answer:
B
|
| question_th |
Q133:
Chapter: - Topic #1
Which option is a characteristic of AI governance frameworks for building trust and deploying human-centered AI technologies?
A
Expanding initiatives across business units to create long-term business value
B
Ensuring alignment with business standards, revenue goals, and stakeholder expectations
C
Overcoming challenges to drive business transformation and growth
D
Developing policies and guidelines for data, transparency, responsible AI, and compliance
Correct Answer:
D
|
| question_th |
Q134:
Chapter: - Topic #1
An ecommerce company is using a generative AI chatbot to respond to customer inquiries. The company wants to measure the financial effect of the chatbot on the company’s operations.Which metric should the company use?
A
Number of customer inquiries handled
B
Cost of training AI models
C
Cost for each customer conversation
D
Average handle time (AHT)
Correct Answer:
C
|
| question_th |
Q135:
Chapter: - Topic #1
A company wants to find groups for its customers based on the customers’ demographics and buying patterns.Which algorithm should the company use to meet this requirement?
A
K-nearest neighbors (k-NN)
B
K-means
C
Decision tree
D
Support vector machine
Correct Answer:
B
|
| question_th |
Q136:
Chapter: - Topic #1
A company’s large language model (LLM) is experiencing hallucinations.How can the company decrease hallucinations?
A
Set up Agents for Amazon Bedrock to supervise the model training.
B
Use data pre-processing and remove any data that causes hallucinations.
C
Decrease the temperature inference parameter for the model.
D
Use a foundation model (FM) that is trained to not hallucinate.
Correct Answer:
C
|
| question_th |
Q137:
Chapter: - Topic #1
A company is using a large language model (LLM) on Amazon Bedrock to build a chatbot. The chatbot processes customer support requests. To resolve a request, the customer and the chatbot must interact a few times.Which solution gives the LLM the ability to use content from previous customer messages?
A
Turn on model invocation logging to collect messages.
B
Add messages to the model prompt.
C
Use Amazon Personalize to save conversation history.
D
Use Provisioned Throughput for the LLM.
Correct Answer:
B
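A minimal sketch of the multi-turn pattern behind Q137: the application keeps the conversation history and sends it back to the model with every new customer message. The model ID and sample messages are illustrative assumptions.

```python
# Sketch: giving the LLM access to earlier turns by sending the accumulated
# conversation history with every request.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
history = []  # grows as the customer and chatbot exchange messages

def chat(user_text):
    history.append({"role": "user", "content": [{"text": user_text}]})
    response = bedrock.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",
        messages=history,  # previous customer and assistant messages included
    )
    assistant_message = response["output"]["message"]
    history.append(assistant_message)
    return assistant_message["content"][0]["text"]

print(chat("My order #1234 arrived damaged."))
print(chat("What did I say my order number was?"))  # answerable only with history
```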
|
| question_th |
Q138:
Chapter: - Topic #1
A company’s employees provide product descriptions and recommendations to customers when customers call the customer service center. These recommendations are based on where the customers are located. The company wants to use foundation models (FMs) to automate this process.Which AWS service meets these requirements?
A
Amazon Macie
B
Amazon Transcribe
C
Amazon Bedrock
D
Amazon Textract
Correct Answer:
C
|
| question_th |
Q139:
Chapter: - Topic #1
A company wants to upload customer service email messages to Amazon S3 to develop a business analysis application. The messages sometimes contain sensitive data. The company wants to receive an alert every time sensitive information is found.Which solution fully automates the sensitive information detection process with the LEAST development effort?
A
Configure Amazon Macie to detect sensitive information in the documents that are uploaded to Amazon S3.
B
Use Amazon SageMaker endpoints to deploy a large language model (LLM) to redact sensitive data.
C
Develop multiple regex patterns to detect sensitive data. Expose the regex patterns on an Amazon SageMaker notebook.
D
Ask the customers to avoid sharing sensitive information in their email messages.
Correct Answer:
A
|
| question_th |
Q140:
Chapter: - Topic #1
Which option is a benefit of using Amazon SageMaker Model Cards to document AI models?
A
Providing a visually appealing summary of a model's capabilities.
B
Standardizing information about a model’s purpose, performance, and limitations.
C
Reducing the overall computational requirements of a model.
D
Physically storing models for archival purposes.
Correct Answer:
B
|
| question_th |
Q141:
Chapter: - Topic #1
What does an F1 score measure in the context of foundation model (FM) performance?
A
Model precision and recall
B
Model speed in generating responses
C
Financial cost of operating the model
D
Energy efficiency of the model’s computations
Correct Answer:
A
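For Q141, the F1 score is the harmonic mean of precision and recall. A small self-contained calculation (the counts are hypothetical):

```python
# F1 score combines precision and recall into a single value.
def f1_score(true_positives, false_positives, false_negatives):
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return 2 * precision * recall / (precision + recall)

# Example counts (hypothetical): 80 TP, 20 FP, 10 FN
print(f1_score(80, 20, 10))  # about 0.842
```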
|
| question_th |
Q142:
Chapter: - Topic #1
A company deployed an AI/ML solution to help customer service agents respond to frequently asked questions. The questions can change over time. The company wants to give customer service agents the ability to ask questions and receive automatically generated answers to common customer questions.Which strategy will meet these requirements MOST cost-effectively?
A
Fine-tune the model regularly.
B
Train the model by using context data.
C
Pre-train and benchmark the model by using context data.
D
Use Retrieval Augmented Generation (RAG) with prompt engineering techniques.
Correct Answer:
D
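A minimal sketch of the Retrieval Augmented Generation pattern named in Q142: retrieve relevant passages, then place them in the prompt. The `search_knowledge_base` function is a placeholder for any retrieval step (for example, a vector-store similarity search), and the model ID is an illustrative assumption.

```python
# Sketch of Retrieval Augmented Generation (RAG) with prompt engineering.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def search_knowledge_base(question, top_k=3):
    # Placeholder: return the most relevant FAQ passages for the question.
    return ["Refunds are issued within 5 business days.",
            "Exchanges require the original receipt."]

def answer(question):
    context = "\n".join(search_knowledge_base(question))
    prompt = (
        "Answer the customer question using only the context below. "
        "If the answer is not in the context, say you do not know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    response = bedrock.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    return response["output"]["message"]["content"][0]["text"]

print(answer("How long do refunds take?"))
```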
|
| question_th |
Q143:
Chapter: - Topic #1
A company built an AI-powered resume screening system. The company used a large dataset to train the model. The dataset contained resumes that were not representative of all demographics.Which core dimension of responsible AI does this scenario present?
A
Fairness
B
Explainability
C
Privacy and security
D
Transparency
Correct Answer:
A
|
| question_th |
Q144:
Chapter: - Topic #1
A global financial company has developed an ML application to analyze stock market data and provide stock market trends. The company wants to continuously monitor the application development phases and to ensure that company policies and industry regulations are followed.Which AWS services will help the company assess compliance requirements? (Choose two.)
A
AWS Audit Manager
B
AWS Config
C
Amazon Inspector
D
Amazon CloudWatch
E
AWS CloudTrail
Correct Answer:
A
B
|
| question_th |
Q145:
Chapter: - Topic #1
A company wants to improve the accuracy of the responses from a generative AI application. The application uses a foundation model (FM) on Amazon Bedrock.Which solution meets these requirements MOST cost-effectively?
A
Fine-tune the FM.
B
Retrain the FM.
C
Train a new FM.
D
Use prompt engineering.
Correct Answer:
D
|
| question_th |
Q146:
Chapter: - Topic #1
A company wants to identify harmful language in the comments section of social media posts by using an ML model. The company will not use labeled data to train the model.Which strategy should the company use to identify harmful language?
A
Use Amazon Rekognition moderation.
B
Use Amazon Comprehend toxicity detection.
C
Use Amazon SageMaker built-in algorithms to train the model.
D
Use Amazon Polly to monitor comments.
Correct Answer:
B
|
| question_th |
Q147:
Chapter: - Topic #1
A media company wants to analyze viewer behavior and demographics to recommend personalized content. The company wants to deploy a customized ML model in its production environment. The company also wants to observe if the model quality drifts over time.Which AWS service or feature meets these requirements?
A
Amazon Rekognition
B
Amazon SageMaker Clarify
C
Amazon Comprehend
D
Amazon SageMaker Model Monitor
Correct Answer:
D
|
| question_th |
Q148:
Chapter: - Topic #1
A company is deploying AI/ML models by using AWS services. The company wants to offer transparency into the models’ decision-making processes and provide explanations for the model outputs.Which AWS service or feature meets these requirements?
A
Amazon SageMaker Model Cards
B
Amazon Rekognition
C
Amazon Comprehend
D
Amazon Lex
Correct Answer:
A
|
| question_th |
Q149:
Chapter: - Topic #1
A manufacturing company wants to create product descriptions in multiple languages.Which AWS service will automate this task?
A
Amazon Translate
B
Amazon Transcribe
C
Amazon Kendra
D
Amazon Polly
Correct Answer:
A
|
| question_th |
Q150:
Chapter: - Topic #1
Which AWS feature records details about ML instance data for governance and reporting?
A
Amazon SageMaker Model Cards
B
Amazon SageMaker Debugger
C
Amazon SageMaker Model Monitor
D
Amazon SageMaker JumpStart
Correct Answer:
A
|
| question_th |
Q151:
Chapter: - Topic #1
A financial company is using ML to help with some of the company’s tasks.Which option is a use of generative AI models?
A
Summarizing customer complaints
B
Classifying customers based on product usage
C
Segmenting customers based on type of investments
D
Forecasting revenue for certain products
Correct Answer:
A
|
| question_th |
Q152:
Chapter: - Topic #1
A medical company wants to develop an AI application that can access structured patient records, extract relevant information, and generate concise summaries.Which solution will meet these requirements?
A
Use Amazon Comprehend Medical to extract relevant medical entities and relationships. Apply rule-based logic to structure and format summaries.
B
Use Amazon Personalize to analyze patient engagement patterns. Integrate the output with a general purpose text summarization tool.
C
Use Amazon Textract to convert scanned documents into digital text. Design a keyword extraction system to generate summaries.
D
Implement Amazon Kendra to provide a searchable index for medical records. Use a template-based system to format summaries.
Correct Answer:
A
|
| question_th |
Q153:
Chapter: - Topic #1
Which option describes embeddings in the context of AI?
A
A method for compressing large datasets
B
An encryption method for securing sensitive data
C
A method for visualizing high-dimensional data
D
A numerical method for data representation in a reduced dimensionality space
Correct Answer:
D
|
| question_th |
Q154:
Chapter: - Topic #1
A company is building an AI application to summarize books of varying lengths. During testing, the application fails to summarize some books.Why does the application fail to summarize some books?
A
The temperature is set too high.
B
The selected model does not support fine-tuning.
C
The Top P value is too high.
D
The input tokens exceed the model’s context size.
Correct Answer:
D
|
| question_th |
Q155:
Chapter: - Topic #1
An airline company wants to build a conversational AI assistant to answer customer questions about flight schedules, booking, and payments. The company wants to use large language models (LLMs) and a knowledge base to create a text-based chatbot interface.Which solution will meet these requirements with the LEAST development effort?
A
Train models on Amazon SageMaker Autopilot.
B
Develop a Retrieval Augmented Generation (RAG) agent by using Amazon Bedrock.
C
Create a Python application by using Amazon Q Developer.
D
Fine-tune models on Amazon SageMaker JumpStart.
Correct Answer:
B
|
| question_th |
Q156:
Chapter: - Topic #1
What is tokenization used for in natural language processing (NLP)?
A
To encrypt text data
B
To compress text files
C
To break text into smaller units for processing
D
To translate text between languages
Correct Answer:
C
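A simplified illustration of the tokenization idea in Q156. Production LLM tokenizers usually split text into subword units (for example, byte-pair encoding), but the basic idea of breaking text into smaller units is the same.

```python
# Simplified word-level tokenization: breaking text into smaller units.
import re

def tokenize(text):
    # Split on word boundaries, keeping punctuation as separate tokens.
    return re.findall(r"\w+|[^\w\s]", text.lower())

print(tokenize("Tokenization breaks text into smaller units!"))
# ['tokenization', 'breaks', 'text', 'into', 'smaller', 'units', '!']
```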
|
| question_th |
Q157:
Chapter: - Topic #1
Which option is a characteristic of transformer-based language models?
A
Transformer-based language models use convolutional layers to apply filters across an input to capture local patterns through filtered views.
B
Transformer-based language models can process only text data.
C
Transformer-based language models use self-attention mechanisms to capture contextual relationships.
D
Transformer-based language models process data sequences one element at a time in cyclic iterations.
Correct Answer:
C
|
| question_th |
Q158:
Chapter: - Topic #1
A financial company is using AI systems to obtain customer credit scores as part of the loan application process. The company wants to expand to a new market in a different geographic area. The company must ensure that it can operate in that geographic area.Which compliance laws should the company review?
A
Local health data protection laws
B
Local payment card data protection laws
C
Local education privacy laws
D
Local algorithm accountability laws
Correct Answer:
D
|
| question_th |
Q159:
Chapter: - Topic #1
A company uses Amazon Bedrock for its generative AI application. The company wants to use Amazon Bedrock Guardrails to detect and filter harmful user inputs and model-generated outputs.Which content categories can the guardrails filter? (Choose two.)
A
Hate
B
Politics
C
Violence
D
Gambling
E
Religion
Correct Answer:
A
C
|
| question_th |
Q160:
Chapter: - Topic #1
Which scenario describes a potential risk and limitation of prompt engineering in the context of a generative AI model?
A
Prompt engineering does not ensure that the model always produces consistent and deterministic outputs, eliminating the need for validation.
B
Prompt engineering could expose the model to vulnerabilities such as prompt injection attacks.
C
Properly designed prompts reduce but do not eliminate the risk of data poisoning or model hijacking.
D
Prompt engineering does not ensure that the model will consistently generate highly reliable outputs when working with real-world data.
Correct Answer:
B
|
| question_th |
Q161:
Chapter: - Topic #1
A publishing company built a Retrieval Augmented Generation (RAG) based solution to give its users the ability to interact with published content. New content is published daily. The company wants to provide a near real-time experience to users.Which steps in the RAG pipeline should the company implement by using offline batch processing to meet these requirements? (Choose two.)
A
Generation of content embeddings
B
Generation of embeddings for user queries
C
Creation of the search index
D
Retrieval of relevant content
E
Response generation for the user
Correct Answer:
A
C
|
| question_th |
Q162:
Chapter: - Topic #1
Which technique breaks a complex task into smaller subtasks that are sent sequentially to a large language model (LLM)?
A
One-shot prompting
B
Prompt chaining
C
Tree of thoughts
D
Retrieval Augmented Generation (RAG)
Correct Answer:
B
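A minimal sketch of the prompt chaining technique from Q162: the complex task is split into subtasks, and each subtask's output feeds the next prompt. The model ID and the report text are illustrative assumptions.

```python
# Sketch of prompt chaining: subtasks sent sequentially to an LLM.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def ask(prompt):
    response = bedrock.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    return response["output"]["message"]["content"][0]["text"]

report = "... long quarterly report text ..."  # placeholder input
key_points = ask(f"List the five most important points in this report:\n{report}")
summary = ask(f"Write a one-paragraph executive summary from these points:\n{key_points}")
headline = ask(f"Write a single headline for this summary:\n{summary}")
print(headline)
```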
|
| question_th |
Q163:
Chapter: - Topic #1
An AI practitioner needs to improve the accuracy of a natural language generation model. The model uses rapidly changing inventory data.Which technique will improve the model's accuracy?
A
Transfer learning
B
Federated learning
C
Retrieval Augmented Generation (RAG)
D
One-shot prompting
Correct Answer:
C
|
| question_th |
Q164:
Chapter: - Topic #1
A company wants to collaborate with several research institutes to develop an AI model. The company needs standardized documentation of model version tracking and a record of model development.Which solution meets these requirements?
A
Track the model changes by using Git.
B
Track the model changes by using Amazon Fraud Detector.
C
Track the model changes by using Amazon SageMaker Model Cards.
D
Track the model changes by using Amazon Comprehend.
Correct Answer:
C
|
| question_th |
Q165:
Chapter: - Topic #1
A company that uses multiple ML models wants to identify changes in original model quality so that the company can resolve any issues.Which AWS service or feature meets these requirements?
A
Amazon SageMaker JumpStart
B
Amazon SageMaker HyperPod
C
Amazon SageMaker Data Wrangler
D
Amazon SageMaker Model Monitor
Correct Answer:
D
|
| question_th |
Q166:
Chapter: - Topic #1
What is the purpose of chunking in Retrieval Augmented Generation (RAG)?
A
To avoid database storage limitations for large text documents by storing parts or chunks of the text
B
To improve efficiency by avoiding the need to convert large text into vector embeddings
C
To improve the contextual relevancy of results retrieved from the vector index
D
To decrease the cost of storage by storing parts or chunks of the text
Correct Answer:
C
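A small sketch of the chunking step described in Q166: splitting a long document into overlapping pieces keeps each chunk topically focused, which improves the contextual relevancy of what the vector index retrieves. The chunk size and overlap are illustrative.

```python
# Simple character-based chunking with overlap.
def chunk_text(text, chunk_size=500, overlap=50):
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

document = "..." * 1000  # stands in for a long published article
print(len(chunk_text(document)), "chunks created")
```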
|
| question_th |
Q167:
Chapter: - Topic #1
A company is developing an editorial assistant application that uses generative AI. During the pilot phase, usage is low and application performance is not a concern. The company cannot predict application usage after the application is fully deployed and wants to minimize application costs.Which solution will meet these requirements?
A
Use GPU-powered Amazon EC2 instances.
B
Use Amazon Bedrock with Provisioned Throughput.
C
Use Amazon Bedrock with On-Demand Throughput.
D
Use Amazon SageMaker JumpStart.
Correct Answer:
C
|
| question_th |
Q168:
Chapter: - Topic #1
A company deployed a Retrieval Augmented Generation (RAG) application on Amazon Bedrock that gathers financial news to distribute in daily newsletters. Users have recently reported politically influenced ideas in the newsletters.Which Amazon Bedrock guardrail can identify and filter this content?
A
Word filters
B
Denied topics
C
Sensitive information filters
D
Content filters
Correct Answer:
B
|
| question_th |
Q169:
Chapter: - Topic #1
A financial company is developing a fraud detection system that flags potential fraud cases in credit card transactions. Employees will evaluate the flagged fraud cases. The company wants to minimize the amount of time the employees spend reviewing flagged fraud cases that are not actually fraudulent.Which evaluation metric meets these requirements?
A
Recall
B
Accuracy
C
Precision
D
Lift chart
Correct Answer:
C
|
| question_th |
Q170:
Chapter: - Topic #1
A company designed an AI-powered agent to answer customer inquiries based on product manuals.Which strategy can improve customer confidence levels in the AI-powered agent's responses?
A
Writing the confidence level in the response
B
Including referenced product manual links in the response
C
Designing an agent avatar that looks like a computer
D
Training the agent to respond in the company's language style
Correct Answer:
B
|
| question_th |
Q171:
Chapter: - Topic #1
A hospital developed an AI system to provide personalized treatment recommendations for patients. The AI system must provide the rationale behind the recommendations and make the insights accessible to doctors and patients.Which human-centered design principle does this scenario present?
A
Explainability
B
Privacy and security
C
Fairness
D
Data governance
Correct Answer:
A
|
| question_th |
Q172:
Chapter: - Topic #1
Which statement presents an advantage of using Retrieval Augmented Generation (RAG) for natural language processing (NLP) tasks?
A
RAG can use external knowledge sources to generate more accurate and informative responses.
B
RAG is designed to improve the speed of language model training.
C
RAG is primarily used for speech recognition tasks.
D
RAG is a technique for data augmentation in computer vision tasks.
Correct Answer:
A
|
| question_th |
Q173:
Chapter: - Topic #1
A company has created a custom model by fine-tuning an existing large language model (LLM) from Amazon Bedrock. The company wants to deploy the model to production and use the model to handle a steady rate of requests each minute.Which solution meets these requirements MOST cost-effectively?
A
Deploy the model by using an Amazon EC2 compute optimized instance.
B
Use the model with on-demand throughput on Amazon Bedrock.
C
Store the model in Amazon S3 and host the model by using AWS Lambda.
D
Purchase Provisioned Throughput for the model on Amazon Bedrock.
Correct Answer:
D
|
| question_th |
Q174:
Chapter: - Topic #1
Which technique involves training AI models on labeled datasets to adapt the models to specific industry terminology and requirements?
A
Data augmentation
B
Fine-tuning
C
Model quantization
D
Continuous pre-training
Correct Answer:
B
|
| question_th |
Q175:
Chapter: - Topic #1
A company is creating an agent for its application by using Amazon Bedrock Agents. The agent is performing well, but the company wants to improve the agent’s accuracy by providing some specific examples.Which solution meets these requirements?
A
Modify the advanced prompts for the agent to include the examples.
B
Create a guardrail for the agent that includes the examples.
C
Use Amazon SageMaker Ground Truth to label the examples.
D
Run a script in AWS Lambda that adds the examples to the training dataset.
Correct Answer:
A
|
| question_th |
Q176:
Chapter: - Topic #1
Which option is a benefit of using infrastructure as code (IaC) in machine learning operations (MLOps)?
A
IaC eliminates the need for hyperparameter tuning.
B
IaC always provisions powerful compute instances, contributing to the training of more accurate models.
C
IaC streamlines the deployment of scalable and consistent ML workloads in cloud environments.
D
IaC minimizes overall expenses by deploying only low-cost instances.
Correct Answer:
C
|
| question_th |
Q177:
Chapter: - Topic #1
A company wants to fine-tune a foundation model (FM) to answer questions for a specific domain. The company wants to use instruction-based fine-tuning.How should the company prepare the training data?
A
Gather company internal documents and industry-specific materials. Merge the documents and materials into a single file.
B
Collect external company reviews from various online sources. Manually label each review as either positive or negative.
C
Create pairs of questions and answers that specifically address topics related to the company's industry domain.
D
Create few-shot prompts to instruct the model to answer only domain knowledge.
Correct Answer:
C
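To illustrate what the question-answer pairs in Q177 can look like on disk, here is a sketch that writes them as JSON Lines. The prompt/completion field names follow the common Amazon Bedrock model customization format, but they are an assumption here; check the data format required by the specific model you fine-tune.

```python
# Sketch of instruction-based fine-tuning data: domain question-answer pairs
# written as JSON Lines. Content and field names are illustrative.
import json

examples = [
    {"prompt": "What is the maximum coverage of our GoldShield policy?",
     "completion": "GoldShield covers losses up to $250,000 per incident."},
    {"prompt": "How do customers file a claim for water damage?",
     "completion": "Customers file water-damage claims through the claims portal within 30 days."},
]

with open("training_data.jsonl", "w") as f:
    for record in examples:
        f.write(json.dumps(record) + "\n")
```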
|
| question_th |
Q178:
Chapter: - Topic #1
Which ML technique ensures data compliance and privacy when training AI models on AWS?
A
Reinforcement learning
B
Transfer learning
C
Federated learning
D
Unsupervised learning
Correct Answer:
C
|
| question_th |
Q179:
Chapter: - Topic #1
A manufacturing company has an application that ingests consumer complaints from publicly available sources. The application uses complex hard-coded logic to process the complaints. The company wants to scale this logic across markets and product lines.Which advantage do generative AI models offer for this scenario?
A
Predictability of outputs
B
Adaptability
C
Less sensitivity to changes in inputs
D
Explainability
Correct Answer:
B
|
| question_th |
Q180:
Chapter: - Topic #1
A financial company wants to flag all credit card activity as possibly fraudulent or non-fraudulent based on transaction data.Which type of ML model meets these requirements?
A
Regression
B
Diffusion
C
Binary classification
D
Multi-class classification
Correct Answer:
C
|
| question_th |
Q181:
Chapter: - Topic #1
A hospital wants to use a generative AI solution with speech-to-text functionality to help improve employee skills in dictating clinical notes.Which AWS service meets these requirements?
A
Amazon Q Developer
B
Amazon Polly
C
Amazon Rekognition
D
AWS HealthScribe
Correct Answer:
D
|
| question_th |
Q182:
Chapter: - Topic #1
Which type of AI model makes numeric predictions?
A
Diffusion
B
Regression
C
Transformer
D
Multi-modal
Correct Answer:
B
|
| question_th |
Q183:
Chapter: - Topic #1
What is the purpose of vector embeddings in a large language model (LLM)?
A
Splitting text into manageable pieces of data
B
Grouping a set of characters to be treated as a single unit
C
Providing the ability to mathematically compare texts
D
Providing the count of every word in the input
Correct Answer:
C
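A tiny numeric sketch of the idea in Q183: embeddings let texts be compared mathematically because similar meanings map to nearby vectors. The three-dimensional vectors below are made up for illustration; real embeddings models output hundreds or thousands of dimensions.

```python
# Comparing texts through their embedding vectors with cosine similarity.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

cat = np.array([0.9, 0.1, 0.3])
kitten = np.array([0.85, 0.15, 0.35])
invoice = np.array([0.05, 0.9, 0.8])

print(cosine_similarity(cat, kitten))   # close to 1: similar meaning
print(cosine_similarity(cat, invoice))  # lower: unrelated meaning
```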
|
| question_th |
Q184:
Chapter: - Topic #1
A company wants to fine-tune a foundation model (FM) by using AWS services. The company needs to ensure that its data stays private, safe, and secure in the source AWS Region where the data is stored.Which combination of steps will meet these requirements MOST cost-effectively? (Choose two.)
A
Host the model on premises by using AWS Outposts.
B
Use the Amazon Bedrock API.
C
Use AWS PrivateLink and a VPC.
D
Host the Amazon Bedrock API on premises.
E
Use Amazon CloudWatch logs and metrics.
Correct Answer:
B
C
|
| question_th |
Q185:
Chapter: - Topic #1
A financial company uses AWS to host its generative AI models. The company must generate reports to show adherence to international regulations for handling sensitive customer data.Which AWS service meets these requirements?
A
Amazon Macie
B
AWS Artifact
C
AWS Secrets Manager
D
AWS Config
Correct Answer:
B
|
| question_th |
Q186:
Chapter: - Topic #1
A medical company wants to modernize its onsite information processing application. The company wants to use generative AI to respond to medical questions from patients.Which AWS service should the company use to ensure responsible AI for the application?
A
Guardrails for Amazon Bedrock
B
Amazon Inspector
C
Amazon Rekognition
D
AWS Trusted Advisor
Correct Answer:
A
|
| question_th |
Q187:
Chapter: - Topic #1
Which metric is used to evaluate the performance of foundation models (FMs) for text summarization tasks?
A
F1 score
B
Bilingual Evaluation Understudy (BLEU) score
C
Accuracy
D
Mean squared error (MSE)
Correct Answer:
B
|
| question_th |
Q188:
Chapter: - Topic #1
What is the benefit of fine-tuning a foundation model (FM)?
A
Fine-tuning reduces the FM's size and complexity and enables slower inference.
B
Fine-tuning uses specific training data to retrain the FM from scratch to adapt to a specific use case.
C
Fine-tuning keeps the FM's knowledge up to date by pre-training the FM on more recent data.
D
Fine-tuning improves the performance of the FM on a specific task by further training the FM on new labeled data.
Correct Answer:
D
|
| question_th |
Q189:
Chapter: - Topic #1
A company wants to improve its chatbot's responses to match the company's desired tone. The company has 100 examples of high-quality conversations between customer service agents and customers. The company wants to use this data to incorporate company tone into the chatbot's responses.Which solution meets these requirements?
A
Use Amazon Personalize to generate responses.
B
Create an Amazon SageMaker HyperPod pre-training job.
C
Host the model by using Amazon SageMaker. Use TensorRT for large language model (LLM) deployment.
D
Create an Amazon Bedrock fine-tuning job.
Correct Answer:
D
|
| question_th |
Q190:
Chapter: - Topic #1
An ecommerce company is using a chatbot to automate the customer order submission process. The chatbot is powered by AI and is available to customers directly from the company's website 24 hours a day, 7 days a week.Which option is an AI system input vulnerability that the company needs to resolve before the chatbot is made available?
A
Data leakage
B
Prompt injection
C
Large language model (LLM) hallucinations
D
Concept drift
Correct Answer:
B
|
| question_th |
Q191:
Chapter: - Topic #1
A social media company wants to prevent users from posting discriminatory content on the company's application. The company wants to use Amazon Bedrock as part of the solution.How can the company use Amazon Bedrock to meet these requirements?
A
Give users the ability to interact based on user preferences.
B
Block interactions related to predefined topics.
C
Restrict user conversations to predefined topics.
D
Provide a variety of responses to select from for user engagement.
Correct Answer:
B
|
| question_th |
Q192:
Chapter: - Topic #1
An education company wants to build an application. The application will give users the ability to enter text or provide a picture of a question. The application will respond with a written answer and an explanation of the written answer.Which model type meets these requirements?
A
Computer vision model
B
Large multi-modal language model
C
Diffusion model
D
Text-to-speech model
Correct Answer:
B
|
| question_th |
Q193:
Chapter: - Topic #1
In which stage of the generative AI model lifecycle are tests performed to examine the model's accuracy?
A
Deployment
B
Data selection
C
Fine-tuning
D
Evaluation
Correct Answer:
D
|
| question_th |
Q194:
Chapter: - Topic #1
Which statement correctly describes embeddings in generative AI?
A
Embeddings represent data as high-dimensional vectors that capture semantic relationships.
B
Embeddings is a technique that searches data to find the most helpful information to answer natural language questions.
C
Embeddings reduce the hardware requirements of a model by using a less precise data type for the weights and activations.
D
Embeddings provide the ability to store and retrieve data for generative AI applications.
Correct Answer:
A
|
| question_th |
Q195:
Chapter: - Topic #1
A company wants to add generative AI functionality to its application by integrating a large language model (LLM). The responses from the LLM must be as deterministic and as stable as possible.Which solution meets these requirements?
A
Configure the application to automatically set the temperature parameter to 0 when submitting the prompt to the LLM.
B
Configure the application to automatically add "make your response deterministic" at the end of the prompt before submitting the prompt to the LLM.
C
Configure the application to automatically add "make your response deterministic" at the beginning of the prompt before submitting the prompt to the LLM.
D
Configure the application to automatically set the temperature parameter to 1 when submitting the prompt to the LLM.
Correct Answer:
A
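A minimal sketch of the correct option in Q195: setting the temperature to 0 in the inference configuration so the model favors its most likely tokens and produces more stable output. The model ID and prompt are illustrative assumptions.

```python
# Sketch: requesting more deterministic output by setting temperature to 0.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    modelId="amazon.titan-text-express-v1",
    messages=[{"role": "user", "content": [{"text": "List three uses of Amazon S3."}]}],
    inferenceConfig={"temperature": 0.0},  # low temperature -> more stable, less random output
)
print(response["output"]["message"]["content"][0]["text"])
```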
|
| question_th |
Q196:
Chapter: - Topic #1
A company needs to select a generative AI model to build an application. The application must provide responses to users in real time.Which model characteristic should the company consider to meet these requirements?
A
Model complexity
B
Innovation speed
C
Inference speed
D
Training time
Correct Answer:
C
|
| question_th |
Q197:
Chapter: - Topic #1
Which term refers to the instructions given to foundation models (FMs) so that the FMs provide a more accurate response to a question?
A
Prompt
B
Direction
C
Dialog
D
Translation
Correct Answer:
A
|
| question_th |
Q198:
Chapter: - Topic #1
A retail company wants to build an ML model to recommend products to customers. The company wants to build the model based on responsible practices.Which practice should the company apply when collecting data to decrease model bias?
A
Use data from only customers who match the demographics of the company's overall customer base.
B
Collect data from customers who have a past purchase history.
C
Ensure that the data is balanced and collected from a diverse group.
D
Ensure that the data is from a publicly available dataset.
Correct Answer:
C
|
| question_th |
Q199:
Chapter: - Topic #1
A company is developing an ML model to predict customer churn.Which evaluation metric will assess the model's performance on a binary classification task such as predicting churn?
A
F1 score
B
Mean squared error (MSE)
C
R-squared
D
Time used to train the model
Correct Answer:
A
|
| question_th |
Q200:
Chapter: - Topic #1
An AI practitioner is evaluating the performance of an Amazon SageMaker model. The AI practitioner must choose a performance metric. The metric must show the ratio of the number of correctly classified items to the total number of correctly and incorrectly classified items.Which metric meets these requirements?
A
Accuracy
B
Precision
C
F1 score
D
Recall
Correct Answer:
A
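The ratio described in Q200 is simply accuracy; a one-line calculation makes it explicit (the counts are hypothetical):

```python
# Accuracy = correctly classified items / all classified items.
def accuracy(correct, incorrect):
    return correct / (correct + incorrect)

print(accuracy(correct=450, incorrect=50))  # 0.9
```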
|
| question_th |
Q201:
Chapter: - Topic #1
An ecommerce company receives multiple gigabytes of customer data daily. The company uses the data to train an ML model to forecast future product demand. The company needs a solution to perform inferences once each day.Which inference type meets these requirements?
A
Batch inference
B
Asynchronous inference
C
Real-time inference
D
Serverless inference
Correct Answer:
A
|
| question_th |
Q202:
Chapter: - Topic #1
A company has developed a generative AI model for customer segmentation. The model has been deployed in the company's production environment for a long time. The company recently noticed some inconsistency in the model's responses. The company wants to evaluate model bias and drift.Which AWS service or feature meets these requirements?
A
Amazon SageMaker Model Monitor
B
Amazon SageMaker Clarify
C
Amazon SageMaker Model Cards
D
Amazon SageMaker Feature Store
Correct Answer:
A
|
| question_th |
Q203:
Chapter: - Topic #1
A company has signed up for Amazon Bedrock access to build applications. The company wants to restrict employee access to specific models available on Amazon Bedrock.Which solution meets these requirements?
A
Use AWS Identity and Access Management (IAM) policies to restrict model access.
B
Use AWS Security Token Service (AWS STS) to generate temporary credentials for model use.
C
Use AWS Identity and Access Management (IAM) service roles to restrict model subscription.
D
Use Amazon Inspector to monitor model access.
Correct Answer:
A
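A minimal sketch of the IAM approach in Q203: an identity policy that allows invoking only one specific Amazon Bedrock foundation model. The model ID, Region, account, and policy name are example assumptions; the policy would then be attached to the relevant user groups or roles.

```python
# Sketch: IAM policy restricting which Bedrock model employees can invoke.
import json
import boto3

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["bedrock:InvokeModel", "bedrock:InvokeModelWithResponseStream"],
            "Resource": "arn:aws:bedrock:us-east-1::foundation-model/amazon.titan-text-express-v1",
        }
    ],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="AllowTitanTextOnly",
    PolicyDocument=json.dumps(policy_document),
)
```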
|
| question_th |
Q204:
Chapter: - Topic #1
Which ML technique uses training data that is labeled with the correct output values?
A
Supervised learning
B
Unsupervised learning
C
Reinforcement learning
D
Transfer learning
Correct Answer:
A
|
| question_th |
Q205:
Chapter: - Topic #1
Which large language model (LLM) parameter controls the number of possible next words or tokens considered at each step of the text generation process?
A
Maximum tokens
B
Top K
C
Temperature
D
Batch size
Correct Answer:
B
|
| question_th |
Q206:
Chapter: - Topic #1
A company is making a chatbot. The chatbot uses Amazon Lex and Amazon OpenSearch Service. The chatbot uses the company's private data to answer questions. The company needs to convert the data into a vector representation before storing the data in a database.Which type of foundation model (FM) meets these requirements?
A
Text completion model
B
Instruction following model
C
Text embeddings model
D
Image generation model
Correct Answer:
C
|
| question_th |
Q207:
Chapter: - Topic #1
A company wants to use a large language model (LLM) to generate product descriptions. The company wants to give the model example descriptions that follow a format.Which prompt engineering technique will generate descriptions that match the format?
A
Zero-shot prompting
B
Chain-of-thought prompting
C
One-shot prompting
D
Few-shot prompting
Correct Answer:
D
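A short sketch of the few-shot prompting technique from Q207: the prompt contains several example descriptions in the desired format so the model imitates that format for the new product. The products and format are made up for illustration.

```python
# Few-shot prompt: example descriptions establish the expected output format.
few_shot_prompt = """Write a product description in the same format as the examples.

Product: Trail running shoes
Description: Lightweight trail shoes | Grippy outsole | Ideal for rocky terrain

Product: Insulated water bottle
Description: 24 oz stainless bottle | Keeps drinks cold 24 hours | Leakproof lid

Product: Wireless earbuds
Description:"""

print(few_shot_prompt)  # send this prompt to the LLM of your choice
```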
|
| question_th |
Q208:
Chapter: - Topic #1
A bank is fine-tuning a large language model (LLM) on Amazon Bedrock to assist customers with questions about their loans. The bank wants to ensure that the model does not reveal any private customer data.Which solution meets these requirements?
A
Use Amazon Bedrock Guardrails.
B
Remove personally identifiable information (PII) from the customer data before fine-tuning the LLM.
C
Increase the Top-K parameter of the LLM.
D
Store customer data in Amazon S3. Encrypt the data before fine-tuning the LLM.
Correct Answer:
B
|
| question_th |
Q209:
Chapter: - Topic #1
A grocery store wants to create a chatbot to help customers find products in the store. The chatbot must check the inventory in real time and provide the product location in the store.Which prompt engineering technique should the store use to build the chatbot?
A
Zero-shot prompting
B
Few-shot prompting
C
Least-to-most prompting
D
Reasoning and acting (ReAct) prompting
Correct Answer:
D
|
| question_th |
Q210:
Chapter: - Topic #1
A company uses a third-party model on Amazon Bedrock to analyze confidential documents. The company is concerned about data privacy.Which statement describes how Amazon Bedrock protects data privacy?
A
User inputs and model outputs are anonymized and shared with third-party model providers.
B
User inputs and model outputs are not shared with any third-party model providers.
C
User inputs are kept confidential, but model outputs are shared with third-party model providers.
D
User inputs and model outputs are redacted before the inputs and outputs are shared with third-party model providers.
Correct Answer:
B
|
| question_th |
Q211:
Chapter: - Topic #1
An animation company wants to provide subtitles for its content.Which AWS service meets this requirement?
A
Amazon Comprehend
B
Amazon Polly
C
Amazon Transcribe
D
Amazon Translate
Correct Answer:
C
|
| question_th |
Q212:
Chapter: - Topic #1
An ecommerce company wants to group customers based on their purchase history and preferences to personalize the user experience of the company's application.Which ML technique should the company use?
A
Classification
B
Clustering
C
Regression
D
Content generation
Correct Answer:
B
|
| question_th |
Q213:
Chapter: - Topic #1
A company wants to control employee access to publicly available foundation models (FMs).Which solution meets these requirements?
A
Analyze cost and usage reports in AWS Cost Explorer.
B
Download AWS security and compliance documents from AWS Artifact.
C
Configure Amazon SageMaker JumpStart to restrict discoverable FMs.
D
Build a hybrid search solution by using Amazon OpenSearch Service.
Correct Answer:
C
|
| question_th |
Q214:
Chapter: - Topic #1
A company has set up a translation tool to help its customer service team handle issues from customers around the world. The company wants to evaluate the performance of the translation tool. The company sets up a parallel data process that compares the responses from the tool to responses from actual humans. Both sets of responses are generated on the same set of documents.Which strategy should the company use to evaluate the translation tool?
A
Use the Bilingual Evaluation Understudy (BLEU) score to estimate the absolute translation quality of the two methods.
B
Use the Bilingual Evaluation Understudy (BLEU) score to estimate the relative translation quality of the two methods.
C
Use the BERTScore to estimate the absolute translation quality of the two methods.
D
Use the BERTScore to estimate the relative translation quality of the two methods.
Correct Answer:
B
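A minimal sketch of the BLEU comparison in Q214: the tool's output is scored against the human translation of the same sentence, which measures n-gram overlap with the reference rather than absolute quality. The sentences are illustrative, and the snippet assumes the nltk package is installed.

```python
# Sketch: scoring the translation tool's output against a human reference with BLEU.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

human_reference = ["the", "refund", "was", "issued", "to", "your", "card", "today"]
tool_candidate = ["the", "refund", "was", "sent", "to", "your", "card", "today"]

smooth = SmoothingFunction().method1
score = sentence_bleu([human_reference], tool_candidate, smoothing_function=smooth)
print("BLEU vs. human translation:", score)
```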
|
| question_th |
Q215:
Chapter: - Topic #1
An AI practitioner wants to generate more diverse and more creative outputs from a large language model (LLM).How should the AI practitioner adjust the inference parameter?
A
Increase the temperature value.
B
Decrease the Top K value.
C
Increase the response length.
D
Decrease the prompt length.
Correct Answer:
A
|
| question_th |
Q216:
Chapter: - Topic #1
A company has developed custom computer vision models. The company needs a user-friendly interface for data labeling to minimize model mistakes on new real-world data.Which AWS service, feature, or tool meets these requirements?
A
Amazon SageMaker Ground Truth
B
Amazon SageMaker Canvas
C
Amazon Bedrock playground
D
Amazon Bedrock Agents
Correct Answer:
A
|
| question_th |
Q217:
Chapter: - Topic #1
A company is integrating AI into its employee recruitment and hiring solution. The company wants to mitigate bias risks and ensure responsible AI practices while prioritizing equitable hiring decisions.Which core dimensions of responsible AI should the company consider? (Choose two.)
A
Fairness
B
Tolerance
C
Flexibility
D
Open source
E
Transparency
Correct Answer:
A
E
|
| question_th |
Q218:
Chapter: - Topic #1
A financial company has deployed an ML model to predict customer churn. The model has been running in production for 1 week. The company wants to evaluate how accurately the model predicts churn compared to actual customer behavior.Which metric meets these requirements?
A
Root mean squared error (RMSE)
B
Return on investment (ROI)
C
F1 score
D
Bilingual Evaluation Understudy (BLEU) score
Correct Answer:
C
|
| question_th |
Q219:
Chapter: - Topic #1
A company has a generative AI application that uses a pre-trained foundation model (FM) on Amazon Bedrock. The company wants the FM to include more context by using company information.Which solution meets these requirements MOST cost-effectively?
A
Use Amazon Bedrock Knowledge Bases.
B
Choose a different FM on Amazon Bedrock.
C
Use Amazon Bedrock Agents.
D
Deploy a custom model on Amazon Bedrock.
Correct Answer:
A
|
| question_th |
Q220:
Chapter: - Topic #1
A food service company wants to collect a dataset to predict customer food preferences. The company wants to ensure that the food preferences of all demographics are included in the data.Which dataset characteristic does this scenario present?
A
Accuracy
B
Diversity
C
Recency bias
D
Reliability
Correct Answer:
B
|
| question_th |
Q221:
Chapter: - Topic #1
A company wants to create a chatbot that answers questions about human resources policies. The company is using a large language model (LLM) and has a large digital documentation base.Which technique should the company use to optimize the generated responses?
A
Use Retrieval Augmented Generation (RAG).
B
Use few-shot prompting.
C
Set the temperature to 1.
D
Decrease the token size.
Correct Answer:
A
|
| question_th |
Q222:
Chapter: - Topic #1
An education company is building a chatbot whose target audience is teenagers. The company is training a custom large language model (LLM). The company wants the chatbot to speak in the target audience's language style by using creative spelling and shortened words.Which metric will assess the LLM's performance?
A
F1 score
B
BERTScore
C
Recall-Oriented Understudy for Gisting Evaluation (ROUGE)
D
Bilingual Evaluation Understudy (BLEU) score
Correct Answer:
B
|
| question_th |
Q223:
Chapter: - Topic #1
A customer service team is developing an application to analyze customer feedback and automatically classify the feedback into different categories. The categories include product quality, customer service, and delivery experience.Which AI concept does this scenario present?
A
Computer vision
B
Natural language processing (NLP)
C
Recommendation systems
D
Fraud detection
Correct Answer:
B
|