Exam Number: DP-700 | Length of test: 120 mins
Exam Name: Implementing Data Engineering Solutions Using Microsoft Fabric | Number of questions in the actual exam: 67
Format: PDF, VPLUS | Passing Score: 700/1000
Download practice test questions
| Title | Size | Hits | Download |
|---|---|---|---|
| Microsoft.Premium.DP-700.67q - DEMO | 4.80 MB | 39 | Download |
| Microsoft.DP-700.By.Udo.26q | 973.51 KB | 39 | Download |
Study Guide for Exam DP-700: Implementing Data Engineering Solutions Using Microsoft Fabric
Audience profile
As a candidate for this exam, you should have subject matter expertise with data loading patterns, data architectures, and orchestration processes. Your responsibilities for this role include:
- Ingesting and transforming data.
- Securing and managing an analytics solution.
- Monitoring and optimizing an analytics solution.
You work closely with analytics engineers, architects, analysts, and administrators to design and deploy data engineering solutions for analytics.
You should be skilled at manipulating and transforming data by using Structured Query Language (SQL), PySpark, and Kusto Query Language (KQL).
Skills at a glance
Implement and manage an analytics solution (30–35%)
- Configure Microsoft Fabric workspace settings
- Implement lifecycle management in Fabric
- Configure security and governance
- Orchestrate processes
Ingest and transform data (30–35%)
- Design and implement loading patterns
- Ingest and transform batch data
- Ingest and transform streaming data
Monitor and optimize an analytics solution (30–35%)
- Monitor Fabric items
- Identify and resolve errors
- Optimize performance
Some new sample questions:
Question:
You have an Azure event hub. Each event contains the following fields:
- BikepointID
- Street
- Neighbourhood
- Latitude
- Longitude
- No_Bikes
- No_Empty_Docks
You need to ingest the events. The solution must retain only events that have a Neighbourhood value of Chelsea, and then store the retained events in a Fabric lakehouse.
What should you use?
A. a KQL queryset
B. an eventstream
C. a streaming dataset
D. Apache Spark Structured Streaming
Question:
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have a Fabric eventstream that loads data into a table named Bike_Location in a KQL database. The table contains the following columns:
You need to apply transformation and filter logic to prepare the data for consumption. The solution must return data for a neighbourhood named Sands End when No_Bikes is at least 15. The results must be ordered by No_Bikes in ascending order.
Solution: You use the following code segment:
Does this meet the goal?
A. Yes
B. No
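For context, a KQL query that would satisfy the stated requirements is sketched below. This is an illustrative example only (assuming the `Bike_Location` table exposes `Neighbourhood` and `No_Bikes` columns as described), not the code segment the original question displayed:

```kql
// Illustrative sketch: filter to Sands End with at least 15 bikes,
// then sort ascending (KQL's order by defaults to descending, so asc is required)
Bike_Location
| where Neighbourhood == "Sands End" and No_Bikes >= 15
| order by No_Bikes asc
```

Whether a given "Solution" variant meets the goal typically hinges on details such as the comparison operator (`>=` vs `>`) and the sort direction.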
Question:
You have a Fabric workspace that contains a semantic model named Model1.
You need to dynamically execute and monitor the refresh progress of Model1.
What should you use?
A. dynamic management views in Microsoft SQL Server Management Studio
B. Monitoring hub
C. dynamic management views in Azure Data Studio
D. a semantic link in a notebook
Question:
You have a Fabric workspace that contains a warehouse named DW1. DW1 is loaded by using a notebook named Notebook1.
You need to identify which version of Delta was used when Notebook1 was executed.
What should you use?
A. Real-Time hub
B. OneLake data hub
C. the Admin monitoring workspace
D. Fabric Monitor
E. the Microsoft Fabric Capacity Metrics app
…………