
Oracle
1Z0-1127-24 Exam
Oracle Cloud Infrastructure 2024 Generative AI Professional


Version: 4.0

Question: 1

In LangChain, which retriever search type is used to balance between relevancy and diversity?

A. top k

B. mmr

C. similarity_score_threshold

D. similarity

Answer: B
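
Note: "mmr" (maximal marginal relevance) is the retriever search type that balances relevance to the query against diversity among the returned documents. Below is a minimal sketch of selecting it in LangChain; it assumes the langchain_community and faiss-cpu packages are installed and uses FakeEmbeddings purely as a stand-in for a real embedding model.

from langchain_community.embeddings import FakeEmbeddings
from langchain_community.vectorstores import FAISS

# Build a tiny in-memory vector store (illustrative documents only).
docs = [
    "OCI offers dedicated AI clusters.",
    "OCI offers dedicated GPU superclusters.",
    "LangChain supports several retriever search types.",
]
vectorstore = FAISS.from_texts(docs, FakeEmbeddings(size=64))

# search_type="mmr" re-ranks candidates to trade off relevance (lambda_mult
# closer to 1) against diversity (lambda_mult closer to 0).
retriever = vectorstore.as_retriever(
    search_type="mmr",
    search_kwargs={"k": 2, "fetch_k": 3, "lambda_mult": 0.5},
)
print(retriever.invoke("Which GPU options does OCI offer?"))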

Question: 2

What does a dedicated RDMA cluster network do during model fine-tuning and inference?

A. It leads to higher latency in model inference.

B. It enables the deployment of multiple fine-tuned models.

C. It limits the number of fine-tuned models deployable on the same GPU cluster.

D. It increases GPU memory requirements for model deployment.

Answer: B

Question: 3

Which role does a "model endpoint" serve in the inference workflow of the OCI Generative AI service?


A. Hosts the training data for fine-tuning custom models

B. Evaluates the performance metrics of the custom model

C. Serves as a designated point for user requests and model responses

D. Updates the weights of the base model during the fine-tuning process

Answer: C

Question: 4

Which is a distinguishing feature of "Parameter-Efficient Fine-Tuning (PEFT)" as opposed to classic "Fine-tuning" in Large Language Model training?

A. PEFT involves only a few or new parameters and uses labeled, task-specific data.

B. PEFT modifies all parameters and uses unlabeled, task-agnostic data.

C. PEFT does not modify any parameters but uses soft prompting with unlabeled data.

D. PEFT modifies parameters and is typically used when no training data exists.

Answer: A
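
As a concrete illustration of option A, LoRA is one PEFT method: the base model's weights stay frozen and only a small number of newly added adapter parameters are trained on labeled, task-specific data. A minimal sketch, assuming the Hugging Face transformers and peft packages; the model name and hyperparameters are illustrative only.

from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Load a small base model (illustrative choice).
base = AutoModelForCausalLM.from_pretrained("gpt2")

# Configure LoRA adapters: only these low-rank matrices are trainable.
config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["c_attn"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically well under 1% of all parameters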

Question: 5

How does the Retrieval-Augmented Generation (RAG) Token technique differ from RAG Sequence when
generating a model's response?

A. Unlike RAG Sequence, RAG Token generates the entire response at once without considering
individual parts.

B. RAG Token does not use document retrieval but generates responses based on pre-existing knowledge
only.

C. RAG Token retrieves documents only at the beginning of the response generation and uses those for
the entire content.

D. RAG Token retrieves relevant documents for each part of the response and constructs the answer
incrementally.


Answer: D
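
To make the distinction in option D concrete, the toy sketch below contrasts the two strategies; retrieve() and generate_segment() are hypothetical stand-ins (real RAG Token interleaves retrieval with decoding at the token level), so this is illustrative only.

def retrieve(query):
    # Hypothetical retriever: returns documents "relevant" to the query.
    return [f"doc relevant to: {query!r}"]

def generate_segment(query, docs):
    # Hypothetical generator: produces one part of the answer from the docs.
    return f"<segment grounded in {docs[0]}>"

def rag_sequence(query, n_segments=3):
    # RAG Sequence: retrieve once and reuse the same documents for the whole response.
    docs = retrieve(query)
    return " ".join(generate_segment(query, docs) for _ in range(n_segments))

def rag_token(query, n_segments=3):
    # RAG Token: re-retrieve for each part and build the response incrementally.
    answer = []
    for _ in range(n_segments):
        docs = retrieve(query + " " + " ".join(answer))
        answer.append(generate_segment(query, docs))
    return " ".join(answer)

print(rag_sequence("What is OCI Generative AI?"))
print(rag_token("What is OCI Generative AI?"))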


Thank You for trying 1Z0-1127-24 PDF Demo

To try our 1Z0-1127-24 practice exam software, visit the link below.

[Link]

Start Your 1Z0-1127-24 Exam Preparation

[Limited Time Offer] Use coupon "20OFF" for a special 20% discount on
your purchase. Test your 1Z0-1127-24 preparation with actual exam
questions.

[Link]
