TASK 7.1: Retrieve previous experiment results (session/Qiskit Runtime)
- Which statement best describes how to retrieve a past Qiskit Runtime job in a new Python session?
a. Use QiskitRuntimeService() and call service.job(<JOB_ID>).result() after obtaining the job ID.
b. Recreate the same circuits and options, then call sampler.run(...).result() without any ID.
c. Call backend.retrieve_job(<JOB_ID>) from qiskit_aer to get the primitive result.
d. Jobs cannot be retrieved after the kernel restarts; results must be recomputed every time.
answer
The answer is a.
You can reconnect to previously submitted Runtime jobs by constructing a QiskitRuntimeService instance and calling service.job(job_id).result(). The job ID can be obtained from job.job_id() at submission time or looked up later via service.jobs(...) or the IBM Quantum Workloads page.
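A minimal sketch of this pattern, with the real service call shown as commented usage (it requires saved IBM Quantum credentials; "YOUR_JOB_ID" is a placeholder):

```python
def fetch_result(service, job_id):
    """Reconnect to a previously submitted Runtime job and return its result."""
    job = service.job(job_id)   # look the job up by its ID
    return job.result()         # final result (PrimitiveResult for primitive jobs)

# Usage against the real service (assumes saved account credentials):
#     from qiskit_ibm_runtime import QiskitRuntimeService
#     service = QiskitRuntimeService()
#     print(fetch_result(service, "YOUR_JOB_ID"))
```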
- You want to programmatically find jobs run in roughly the last three months to resume analysis. Which approach is correct?
a. service.jobs(created_after=datetime.now()-timedelta(days=90)) to list recent jobs and then select the ones you need.
b. service.jobs(last_three_months=True) to let the service infer the period automatically.
c. service.list_recent_jobs(90_days=True) and then call .result() on the list.
d. service.jobs() always returns only the last 7 days, so older jobs cannot be listed.
answer
The answer is a.
QiskitRuntimeService.jobs accepts filters such as created_after (a datetime.datetime object). You can compute a boundary like “now − 90 days”, list those jobs, and then retrieve any job by ID to get its results.
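As a sketch, the “now − 90 days” boundary can be computed with the standard library and passed to service.jobs (the helper name jobs_since is ours; the real call is shown commented because it needs saved credentials):

```python
from datetime import datetime, timedelta

def jobs_since(service, days=90):
    """List jobs created in roughly the last `days` days."""
    cutoff = datetime.now() - timedelta(days=days)   # boundary: now - 90 days
    return service.jobs(created_after=cutoff)

# Usage against the real service:
#     from qiskit_ibm_runtime import QiskitRuntimeService
#     for job in jobs_since(QiskitRuntimeService(), days=90):
#         print(job.job_id())
```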
- While reviewing an iterative workflow executed in a Runtime session, you want to retrieve only the jobs that were submitted inside that session. What is the most appropriate approach?
a. Filter with service.jobs(session_id=<SESSION_ID>) (or equivalent filter) to return jobs that belong to the session, then retrieve the desired job by ID.
b. Sessions do not group jobs, so you must search by backend only.
c. Call Session(<backend>).jobs() without any session identifier; it will infer the old session automatically.
d. There is no way to filter by session; you must manually scan all jobs one by one.
answer
The answer is a.
Runtime sessions group iterative jobs. When querying with service.jobs(...), you can filter by attributes including session (as documented), then call service.job(job_id) to fetch a specific job. This lets you focus on jobs created within that past session.
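A sketch of the session filter (the exact filter keyword may vary by client version; session_id is the form used here, and "YOUR_SESSION_ID" is a placeholder):

```python
def session_jobs(service, session_id):
    """List only the jobs submitted inside the given Runtime session."""
    return service.jobs(session_id=session_id)

# Usage against the real service:
#     from qiskit_ibm_runtime import QiskitRuntimeService
#     service = QiskitRuntimeService()
#     for job in session_jobs(service, "YOUR_SESSION_ID"):
#         print(job.job_id(), job.status())
```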
- After you retrieve a RuntimeJob object via service.job(job_id), what does calling .result() typically return for primitive jobs?
a. A PrimitiveResult object (e.g., containing PubResult entries with data and metadata).
b. A raw dictionary of bitstrings to counts for every circuit, identical to AerSimulator.get_counts().
c. An ExperimentData object exclusively used by Qiskit Experiments.
d. A CSV string that must be parsed manually to access expectation values.
answer
The answer is a.
For Sampler/Estimator primitive jobs, job.result() returns a PrimitiveResult containing one PubResult per executed PUB, each holding the data (e.g., counts or expectation values) and metadata (shots, resilience options, etc.).
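A rough sketch of walking such a result for a Sampler job. The register attribute name ('meas') is an assumption; it depends on how your circuit's classical register was named:

```python
def summarize_sampler_result(result):
    """Walk a Sampler PrimitiveResult: one PubResult per executed PUB."""
    summaries = []
    for pub_result in result:                       # PrimitiveResult is iterable
        counts = pub_result.data.meas.get_counts()  # 'meas' register name assumed
        summaries.append((counts, pub_result.metadata))
    return summaries

# Usage, after retrieving a Sampler job:
#     result = service.job("YOUR_JOB_ID").result()
#     for counts, metadata in summarize_sampler_result(result):
#         print(counts, metadata)
```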
- You want to persist results to disk and later load them in a clean kernel without re-querying the service. What is the recommended method?
a. Serialize with Python’s json.dump(..., cls=RuntimeEncoder) and later load with json.load(..., cls=RuntimeDecoder).
b. Use pickle.dump(result) and pickle.load(result) because the result is always picklable across library versions.
c. Save with numpy.savez(result) because primitive results are NumPy arrays only.
d. Call result.to_csv("result.csv") because all primitive results implement a CSV export method.
answer
The answer is a.
The guide recommends JSON serialization with Qiskit Runtime’s custom RuntimeEncoder and RuntimeDecoder so complex result types (e.g., PrimitiveResult, PubResult) can be safely stored and restored in another session.
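A sketch of the save/load round trip. The helpers are ours; the RuntimeEncoder/RuntimeDecoder classes come from qiskit_ibm_runtime, and the usage is commented because it needs a real result object:

```python
import json

def save_result(result, path, encoder_cls=None):
    """Persist a Runtime result to disk as JSON."""
    with open(path, "w") as f:
        json.dump(result, f, cls=encoder_cls)

def load_result(path, decoder_cls=None):
    """Restore a previously saved result in a fresh kernel."""
    with open(path) as f:
        return json.load(f, cls=decoder_cls)

# Usage with Qiskit Runtime's custom encoder/decoder:
#     from qiskit_ibm_runtime import QiskitRuntimeService, RuntimeEncoder, RuntimeDecoder
#     result = QiskitRuntimeService().job("YOUR_JOB_ID").result()
#     save_result(result, "result.json", encoder_cls=RuntimeEncoder)
#     restored = load_result("result.json", decoder_cls=RuntimeDecoder)
```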
- Which statement about job IDs is most accurate for retrieving results later?
a. You must keep or re-discover the job ID; retrieval later requires service.job(<JOB_ID>) before calling .result().
b. Job IDs are optional; the service can always infer the correct job from the circuit metadata alone.
c. Job IDs expire immediately upon completion, so you cannot use them after the job finishes.
d. Job IDs can be reconstructed deterministically from the circuit hash and the backend name.
answer
The answer is a.
The documentation emphasizes that retrieving results later relies on the job ID. Save it at submission time (job.job_id()) or look it up on the Workloads page or with service.jobs(...).
- Regarding interim results of a Runtime job, which of the following is correct?
a. Interim results, if available, can be queried, but the service keeps them only for a limited time after the job finishes (e.g., a couple of days).
b. Interim results are stored indefinitely and are guaranteed retrievable at any time.
c. Interim results are never accessible after submission; only final results are returned.
d. Interim results are persisted inside result.json automatically when you call RuntimeEncoder.
answer
The answer is a.
Per the API reference, interim results are available but retained only for a short window after job completion (e.g., two days). You should stream or retrieve them promptly if you need them for later analysis.
- You previously ran a set of jobs with the Sampler primitive under a session and tagged them with "opt-phase-1". To retrieve only those jobs now, which call is the best starting point?
a. service.jobs(program_id="sampler", tags=["opt-phase-1"]) to list matching jobs, then pick IDs to fetch results.
b. service.get_jobs_by_tag("opt-phase-1", primitive="sampler") which returns results directly.
c. service.jobs(filter="tag:opt-phase-1 AND primitive:sampler") using a Lucene-like string.
d. service.jobs() and manually scan all job metadata in Python for the tag value.
answer
The answer is a.
QiskitRuntimeService.jobs supports filtering by attributes such as program_id and (depending on client version) tags. Listing first and then retrieving by ID is the standard pattern for resuming from tagged workloads.
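A sketch of the list-then-fetch pattern for tagged Sampler jobs (whether the tags keyword is accepted depends on your client version; the helper name is ours):

```python
def tagged_sampler_jobs(service, tag):
    """List Sampler jobs carrying the given tag."""
    return service.jobs(program_id="sampler", tags=[tag])

# Usage against the real service:
#     from qiskit_ibm_runtime import QiskitRuntimeService
#     service = QiskitRuntimeService()
#     for job in tagged_sampler_jobs(service, "opt-phase-1"):
#         print(job.job_id())   # then service.job(<id>).result() as needed
```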
TASK 7.2: Monitor jobs
- Which method allows you to monitor a job’s execution status directly in a Jupyter notebook?
a. job.result()
b. job.wait_for_final_state()
c. qiskit_ibm_runtime.job_monitor(job)
d. job.stream_results()
answer
The answer is c.
The job_monitor utility provides a real-time view of job progress in Jupyter notebooks, displaying queue position and state changes until completion.
- When calling job.wait_for_final_state(), what happens if the job is still queued or running?
a. It raises an error immediately.
b. It blocks until the job reaches a terminal state (such as DONE, ERROR, or CANCELLED).
c. It prints logs continuously until the job finishes.
d. It returns partial results at intervals.
answer
The answer is b.
wait_for_final_state() suspends execution until the job is completed, failed, or cancelled. This is useful in scripts where interactive monitoring is not required.
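A sketch of the blocking pattern in a script (the helper name is ours; the real-service usage is commented since it needs saved credentials):

```python
def run_and_block(job):
    """Block until the job reaches a terminal state, then report that state."""
    job.wait_for_final_state()   # returns once the job is DONE, ERROR, or CANCELLED
    return job.status()

# Usage against the real service:
#     from qiskit_ibm_runtime import QiskitRuntimeService
#     job = QiskitRuntimeService().job("YOUR_JOB_ID")
#     print(run_and_block(job))
```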
- Which of the following states indicate that a Qiskit Runtime job has finished successfully?
a. RUNNING
b. QUEUED
c. DONE
d. INITIALIZING
answer
The answer is c.
Job states such as QUEUED, INITIALIZING, or RUNNING are intermediate, while DONE is the final state indicating successful completion.
- How can you retrieve and print the current status of a submitted job?
a. print(job.status())
b. print(job.state)
c. print(job.result().status)
d. print(service.job_status(job_id))
answer
The answer is a.
job.status() returns the job’s status as a JobStatus object (a plain string in newer client versions), which can be printed or checked against specific states.
- In what scenario would you use job.stream_results()?
a. To cancel a job while it is queued.
b. To receive interim results as they become available during job execution.
c. To monitor system backend logs in real time.
d. To stream the job’s result object to a file.
answer
The answer is b.
stream_results() enables users to fetch interim results from a job before final completion, useful for iterative algorithms or monitoring progress.
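A sketch of registering a streaming callback. The (job_id, payload) callback signature is an assumption based on the discussion above, and the sink-list pattern is ours:

```python
def watch_interim(job, sink):
    """Register a callback that appends interim payloads to `sink` as they arrive."""
    def on_interim(job_id, data):      # signature assumed: (job ID, interim payload)
        sink.append((job_id, data))
    job.stream_results(on_interim)
    return sink

# Usage against a live job:
#     interim = watch_interim(job, [])
#     ... later, inspect `interim` for payloads received so far
```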
- Which of the following statements about monitoring jobs in IBM Quantum is correct?
a. The only way to check job progress is by waiting for results.
b. Jobs expose both blocking (wait_for_final_state) and interactive (job_monitor) methods for monitoring.
c. Once a job is submitted, you cannot check its queue position.
d. Monitoring jobs requires exporting logs manually from the dashboard.
answer
The answer is b.
IBM Quantum provides both interactive monitoring via job_monitor and blocking monitoring with wait_for_final_state, giving flexibility for different workflows.
- Which job state typically appears before a job transitions to RUNNING?
a. QUEUED
b. DONE
c. ERROR
d. CANCELLED
answer
The answer is a.
The job lifecycle often transitions from QUEUED → RUNNING → DONE. Errors or cancellations can occur at any stage.
- If you want to monitor multiple jobs submitted in a batch, what is a recommended approach?
a. Call job_monitor() on the entire list at once.
b. Loop through each job in the list and apply job_monitor(job) individually.
c. Use service.monitor_all() to monitor all jobs.
d. Batch jobs cannot be monitored programmatically.
answer
The answer is b.
To monitor multiple jobs, iterate through the list of job objects and apply monitoring functions like job_monitor(job) or check their statuses via job.status().
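A sketch of batch monitoring using a plain status-polling loop instead of job_monitor, so it works in non-notebook scripts too (the helper name and the str()-based state check are our choices):

```python
import time

def wait_all(jobs, poll_seconds=5.0):
    """Poll a batch of jobs until every one reports a terminal state."""
    terminal = {"DONE", "ERROR", "CANCELLED"}
    statuses = {}
    pending = list(jobs)
    while pending:
        still_running = []
        for job in pending:
            status = str(job.status())   # str() covers both enum and string returns
            statuses[job.job_id()] = status
            if not any(t in status for t in terminal):
                still_running.append(job)
        if still_running:
            time.sleep(poll_seconds)
        pending = still_running
    return statuses

# Usage against real jobs submitted in a batch:
#     print(wait_all([job1, job2, job3]))
```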