Monitor streaming profile ingestion
You can use the monitoring dashboard in the Adobe Experience Platform UI to conduct real-time monitoring of streaming profile ingestion within your organization. Use this feature to gain greater transparency into throughput, latency, and data quality metrics for your streaming data, and to receive proactive alerts and actionable insights that help you identify potential capacity violations and data ingestion issues.
Read the following guide to learn how to use the monitoring dashboard to track rates and metrics for streaming profile ingestion jobs in your organization.
Get started
This guide requires a working understanding of the following components of Experience Platform:
- Dataflows: Dataflows represent data jobs that transfer information across Experience Platform. They are configured across various services to facilitate the movement of data from source connectors to target datasets, as well as to Identity Service, Real-Time Customer Profile, and Destinations.
- Real-Time Customer Profile: Real-Time Customer Profile combines data from multiple sources—online, offline, CRM, and third-party—into a single, actionable view of each customer, enabling consistent and personalized experiences across all touch points.
- Streaming ingestion: Streaming ingestion for Experience Platform provides a method to send data from client-side and server-side devices to Experience Platform in real time. Experience Platform enables you to drive coordinated, consistent, and relevant experiences by generating a Real-Time Customer Profile for each of your individual customers. Streaming ingestion plays a key role in building these profiles with as little latency as possible.
- Capacities: In Experience Platform, capacities let you know if your organization has exceeded any of your guardrails and give you information on how to fix these issues.
Monitoring metrics for streaming profile ingestion streaming-profile-metrics
Use the metrics table for information specific to your dataflows. Refer to the following table for details on each column.
| View | Metric refresh |
| --- | --- |
| Sandbox/Dataflow | Real-time monitoring with a data refresh every 60 seconds. |
| Dataflow run | Metrics grouped in 15-minute intervals. |
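As an illustration of the 15-minute grouping described above (this is not Experience Platform code; the timestamps and record counts below are hypothetical), flow-run metrics can be bucketed like this:

```python
from collections import defaultdict
from datetime import datetime

def bucket_15min(events):
    """Group (timestamp, record_count) pairs into 15-minute buckets,
    summing record counts per bucket. Illustrative only."""
    buckets = defaultdict(int)
    for ts, count in events:
        # Floor the timestamp to the start of its 15-minute window.
        floored = ts.replace(minute=(ts.minute // 15) * 15,
                             second=0, microsecond=0)
        buckets[floored] += count
    return dict(buckets)

# Hypothetical ingestion events: 9:02 and 9:14 fall in the 9:00 bucket,
# 9:20 falls in the 9:15 bucket.
events = [
    (datetime(2024, 1, 1, 9, 2), 100),
    (datetime(2024, 1, 1, 9, 14), 50),
    (datetime(2024, 1, 1, 9, 20), 75),
]
buckets = bucket_15min(events)
```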
Use the monitoring dashboard for streaming profile ingestion
To access the monitoring dashboard for streaming profile ingestion, go to the Experience Platform UI, select Monitoring from the left navigation, and then select Streaming end-to-end.
Refer to the top-header of the dashboard for the Profile metrics card. Use this display to view information on the records ingested, failed, and skipped, as well as information on the current status of request throughput and latency.
Next, use the interface to view detailed information on your streaming profile ingestion metrics. Use the calendar feature to toggle between different timeframes. You can select from the following pre-configured time windows:
- Last 6 hours
- Last 12 hours
- Last 24 hours
- Last 7 days
- Last 30 days
Alternatively, you can manually configure your own timeframe using the calendar.
You can use three different metric categories in the monitoring dashboard for streaming profile ingestion: Throughput, Ingestion, and Latency.
Select Throughput to view information on the amount of data that Experience Platform is processing given a configured period of time. Refer to this metric to evaluate the efficiency and capacity of your system.
- Capacity: The maximum amount of data that your sandbox can process under defined conditions.
- Request throughput: The rate at which events are received by the ingestion system, measured in events per second.
- Processing throughput: The rate at which the system successfully ingests and processes incoming event payloads, measured in events per second.
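For intuition, both throughput metrics reduce to an events-per-second rate over a window. A minimal sketch with hypothetical counts (the 60-second window mirrors the dashboard's refresh interval; this is not Experience Platform code):

```python
def throughput(event_count, window_seconds):
    """Events per second over a measurement window. Illustrative only."""
    return event_count / window_seconds

received = 90_000   # events received in the window (hypothetical)
processed = 88_500  # events successfully processed (hypothetical)
window = 60         # seconds

request_tp = throughput(received, window)      # 1500.0 events/sec
processing_tp = throughput(processed, window)  # 1475.0 events/sec
```

The gap between the two rates indicates events that were received but not (yet) successfully processed.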
Select Ingestion to view information on the ingestion jobs in your sandbox. These ingestion jobs are measured with three different metrics.
- Records ingested: The total number of records created within a given time period. This metric represents successful data ingestion in your sandbox.
- Records failed: The total number of records that did not get ingested due to errors.
- Records skipped: The total number of records that were dropped due to violation of capacity limits.
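The three metrics above can be thought of as tallies over per-record outcomes. A hypothetical sketch (the status labels are assumptions for illustration, not an Experience Platform API):

```python
from collections import Counter

# Hypothetical per-record ingestion outcomes for one time window.
results = ["ingested"] * 8 + ["failed"] * 1 + ["skipped"] * 1

counts = Counter(results)
records_ingested = counts["ingested"]  # successfully created records
records_failed = counts["failed"]      # not ingested due to errors
records_skipped = counts["skipped"]    # dropped due to capacity limits

# Share of records that made it through successfully.
success_rate = records_ingested / len(results)  # 0.8
```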
Select Latency to view information on the amount of time it takes Experience Platform to respond to a request or complete an operation within a given time period.
Use the dataflow metrics table
The dataflow table lists all streaming ingestion activities with their corresponding set of metrics for Real-Time Customer Profile. Each dataflow is listed with its corresponding dataset.
If you are approaching the limits of your sandbox-level capacity, you can refer to the Max throughput column to identify any existing dataflows that are contributing to your consumption rates. Read the best practices section for more information on dataflow management best practices.
To monitor the data that is being ingested in a specific dataflow, select the filter icon.
Next, use the dataflow metrics interface to select the specific flow run that you want to inspect, then select the filter icon.
Dataflow runs represent an instance of dataflow execution. For example, if a dataflow is scheduled to run hourly at 9:00 AM, 10:00 AM, and 11:00 AM, then you would have three instances of a flow run. Flow runs are specific to your particular organization.
Use the dataflow run details page to view metrics and information of your selected run iteration.
Dataflow management best practices best-practices
Read the following section for information on how to best manage your dataflows and optimize your data consumption on Experience Platform.
Evaluate and optimize streaming ingestion dataflows
To ensure efficient streaming ingestion, review and adjust your dataflows and processing strategy:
- Assess current usage: Identify which dataflows and datasets are contributing most to throughput.
- Prioritize valuable data: Not all data may be necessary. Exclude data that doesn’t support your use cases to reduce storage and improve efficiency.
- Optimize processing mode: Determine if some data can be shifted from streaming to batch ingestion. Reserve streaming for use cases that require low latency, such as real-time segmentation.
Plan for capacity and seasonal traffic
If your current limit of 1,500 events per second is insufficient, consider optimizing your data strategy or increasing your license capacity:
- Analyze dataset and sandbox usage: Review both current and historical data to understand how traffic and engagement impact streaming segmentation throughput.
- Account for seasonality: Identify peak traffic periods driven by recurring marketing campaigns or industry-specific cycles.
- Forecast future demand: Estimate upcoming traffic and engagement volumes based on past seasonal trends, planned campaigns, or major events.
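When forecasting demand, a simple headroom check against the 1,500 events-per-second limit can flag whether a projected seasonal peak would violate capacity. The baseline and uplift figures below are hypothetical:

```python
def headroom(forecast_eps, capacity_eps=1500):
    """Remaining capacity in events per second; a negative value means
    the forecast would violate the sandbox limit. Illustrative only."""
    return capacity_eps - forecast_eps

# Hypothetical seasonal forecast: a 40% holiday uplift over baseline.
baseline = 1000
peak = round(baseline * 1.4)  # 1400 events/sec

remaining = headroom(peak)    # 100 events/sec of headroom left
```

A negative result would suggest shifting some data to batch ingestion, filtering unneeded records, or increasing license capacity.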
Ingest only data that is required for your use cases. Ensure that you filter out unnecessary data.
- Adobe Analytics: Use row-level filtering to optimize your data intake.
- Sources: Use the Flow Service API to filter row-level data for supported sources like Snowflake and Google BigQuery.
- Edge datastreams: Configure dynamic datastreams to perform row-level filtering of traffic coming in from the Web SDK.
Frequently asked questions faq
Read this section for answers to frequently asked questions about monitoring for streaming profile ingestion.
Why do my metrics look different between the Capacity and Monitoring dashboards for request throughput?
The Monitoring dashboard shows real-time metrics for ingestion and processing. These numbers are exact metrics recorded at the time of activity. Conversely, the Capacity dashboard uses a smoothing mechanism for throughput capacity calculation. This mechanism helps reduce short-lived spikes from instantly qualifying as violations and ensures that capacity alerts focus on sustained trends, rather than momentary bursts.
Due to the smoothing mechanism, you may notice:
- Small spikes in Monitoring that do not appear in Capacity.
- Slightly lower values in Capacity compared to Monitoring at the same timestamp.
Both dashboards are accurate, but they are designed for different purposes.
- Monitoring: Detailed, moment-by-moment operational visibility.
- Capacity: Strategic view for identifying usage and violation patterns.
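To see why smoothing changes the picture, consider a trailing moving average. The Capacity dashboard's actual smoothing mechanism is not documented here, so this is only an assumption-based illustration of the general effect:

```python
def moving_average(values, window=3):
    """Trailing moving average over a fixed window: the kind of
    smoothing that damps short-lived spikes. Illustrative only."""
    out = []
    for i in range(len(values)):
        lo = max(0, i - window + 1)
        chunk = values[lo:i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# Hypothetical per-minute throughput with one momentary spike.
raw = [1000, 1000, 1900, 1000, 1000]
smoothed = moving_average(raw)
# The raw series peaks at 1900 events/sec, but the smoothed series
# peaks at 1300, so the burst never registers as a sustained violation.
```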
Next steps next-steps
By following this tutorial, you learned how to monitor streaming profile ingestion jobs in your organization. Read the following documents for additional information on monitoring data for Real-Time Customer Profile.