
Intelligently re-engage your customers: Luma examples

Learn how Adobe adapted the Intelligent Re-engagement use case to work with the Luma demo site, building on the foundation implementation documented in the Data Architect and Data Engineer tutorial and the Experience Platform Web SDK tutorial.

Implementation

Transcript

So how can you execute the intelligent re-engagement use case at your company? I’m Daniel Wright, Technical Marketing Engineer, and in this series of videos, we’ll walk you through an example implementation and execution using our fictional brand, Luma. This video is for the developers doing the implementation work, and the other videos will show marketers how to do their part in Real-Time Customer Data Platform and Journey Optimizer. So let’s dive in. The Adobe Experience Platform components that we’ll use in our implementation are schemas, identities, data sets, data ingestion, and profiles. Let’s make sure we understand the business requirements and then transition into our data model and schemas. We have three scenarios. First is abandoned browse, in which a customer browses a piece of content but doesn’t take the next step, and you want to re-engage them. We’re going to showcase this in a retail scenario, but you can apply it to almost any industry. Maybe you’re a financial institution and somebody reads about your auto loans but doesn’t apply for one. Next is abandoned cart. In this one, a customer adds a product to their shopping cart but doesn’t complete the purchase. Finally, we have an order confirmation scenario, which is sending a message after a purchase or other conversion event. So what data do we need to pull this off? I’m going to break this problem down into three themes: ID, qualification, and messaging. Okay, let’s start with ID. Who is this user, so we can message them? They need to be what we call a known user. We need to be able to connect online behavior to an email address or mobile phone number. Typically, you would get this information by having an account creation process, and they occasionally log in to your website or mobile app. For in-store purchases, it’s a good idea to have an incentive to self-identify, say, by entering loyalty program details at checkout. Qualification is about what makes them eligible.
In our use case, there are different events used in these scenarios, like viewing a product, adding it to their shopping cart, or purchasing something. Accompanying these events, you need things like timestamps, because the passage of time is another important element of qualification. Finally, we have messaging. What message do you want to show them? If you want to show them the actual products they browsed, abandoned, or purchased, then you need to collect SKUs, product names, images, and things like that. Also, what channel will you use to message them? What are their communication preferences? So now that we know the scenarios and the data points they require, we can build an entity relationship diagram of our data model.

Here are the three main schemas mentioned in the use case: customer attributes, which uses the XDM Individual Profile class, since these are attributes of the customer; customer digital transactions, which uses the XDM ExperienceEvent class, since this captures actions a customer has taken on the website or mobile app over a period of time; and customer offline transactions, which also uses the XDM ExperienceEvent class, again because these are actions that customers have taken. So here’s the thing: you don’t need to use these exact schemas, you just need to collect the right data for the use case. For example, here’s our data model for Luma. Now, I like to have separate schemas for each data source. It might look more complex, but I find it more intuitive. Your data model is probably going to be even more elaborate, because you’re going to onboard data from additional sources and need additional fields for additional use cases. The important thing is that you’re collecting the right data in the right format. By the right format, I mean use the XDM ExperienceEvent class for event data like the product views and purchases, and use the XDM Individual Profile class for attribute data like email addresses and consent preferences. Use the right data types and field properties to help ensure data integrity. For the most part, I’m using the same field groups that were outlined in the use case document.

Now let’s talk about identities and profiles. We want to use data from our website and mobile app, store purchases, and systems with attribute data about our customers, like CRM or loyalty. We’re going to stitch the data from these sources together to build real-time customer profiles. Identity fields and the identity graph are what make this possible. Let’s take a look at how we collect these with Luma, starting with the CRM system. Our CRM system uses a CRM ID as the primary identity, and you can see how it’s labeled as such in the schema. Note that my email and phone number fields are also in this schema, and I don’t have to use them as identities. Our loyalty system uses a loyalty ID as its primary identity, but it also uses the CRM ID as a secondary identity. This allows us to connect profiles between the CRM and loyalty systems. Note that these identity fields are custom fields added through a custom field group, and I’ve also created custom identity namespaces for them. Our offline purchases schema, which captures those in-store orders, also uses the loyalty ID as the primary.

Anonymous in-store purchases, which wouldn’t have a loyalty ID, aren’t helpful in the use case. In our use case, we use the in-store purchases to send or suppress messages, so if we can’t tie the in-store purchase back to a known user, we don’t need that data for our scenarios anyway. Our consent and test profile schemas also use the CRM ID as the primary.

Now let’s look at the web and mobile schema. Note that I’m not using any fields labeled as identities. Instead, I’m using the identity map, which is automatically part of every experience event schema. When you use the identity map, you specify the identity namespace and whether or not the identity is primary when you send the data to Platform. I’ll explain this more when we get to the data ingestion portion.

So, by having a well-considered data model with good identity selection, identity graphs can form.

Let’s take a look at one. I’ll go to the identities screen and search for one of my CRM IDs. I can turn on the visualization so we can see how the loyalty ID, CRM ID, and device IDs, called Experience Cloud IDs or ECIDs, are all graphed together for this user.

So we’ve reviewed the schemas and identities. Now let’s take a look at our data sets. Data sets are no big deal. They just take a few seconds to create, and I use one for each schema mentioned earlier. I do like to use separate data sets for my web and mobile data.

To build profiles, you need to take one minor step and enable both schemas and data sets for Profile, which is done through these toggles. You need to have identity fields to enable Profile, and with a schema using the identity map, there will be an additional dialog you go through to confirm that you’ll be passing your identities that way.

Once you have schemas and data sets enabled for Profile and have ingested data, you can see profiles in the interface. So let’s click through to the profile viewer, where we can see that this profile contains attributes from CRM, loyalty, and consent, and event data from the website and mobile app. Now let’s pivot to data ingestion, and we’ll start with the website and mobile app. We’ve implemented with the Experience Platform Web and Mobile SDKs, which can send data to Platform as well as other Adobe applications and third parties. But this isn’t actually a requirement. If you use AppMeasurement for your web analytics data, you can use the Adobe Analytics source connector to pull that data into Platform. And if you use a non-Adobe analytics vendor on your website and mobile app, you can ingest that data into Platform too, using source connectors or APIs.

When someone logs into the Luma website, they become a known user, and we can pass their authenticated customer ID to Adobe. If I inspect a network call, you can see I’m passing this Luma CRM ID as the primary identity in the hit. I use tags to implement the Web SDK, and there is a special identity map data element type that you can use to set these authenticated IDs.
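For reference, an identity map payload for a logged-in visitor might look roughly like the following sketch. The namespace symbol `lumaCRMId` and the ID value are illustrative assumptions; your custom identity namespace will have its own symbol.

```javascript
// Hypothetical sketch of the identity map sent for a logged-in visitor.
// "lumaCRMId" is an assumed custom namespace symbol and the ID value is
// made up -- substitute your own namespace and CRM identifier.
const xdmLogin = {
  identityMap: {
    lumaCRMId: [
      {
        id: "112ca06e-53d3-4db3-a049-cc45b71e77a1", // illustrative CRM ID
        primary: true,                              // primary identity for this hit
        authenticatedState: "authenticated"
      }
    ]
  }
};

// With the Web SDK (alloy) loaded on the page, a payload like this
// would be sent as part of: alloy("sendEvent", { xdm: xdmLogin });
console.log(JSON.stringify(xdmLogin, null, 2));
```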

There’s another identity, a device ID called the Experience Cloud ID or ECID, and that gets added to the identity map on the Platform Edge Network before the data is sent to Platform. So I won’t see it in the call, but it will be there when the data gets into Platform, and the ECID is used as the primary identity when the user is not logged in. The mobile app does the same thing. We also collect our events through the SDK: the product views, adds to cart, and purchases. There are two common ways of doing this. First is to use the event type field. Note here I have eventType set to commerce.productViews. I also have a commerce.productViews.value set to 1. That’s another way of signaling that a product view has occurred. I can only send one event type per call, but with that .value approach, you can indicate that multiple events have occurred within a single call. It’s just important that your marketing team knows which approach you’re using, in order to build their journeys and audiences correctly.
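The two event-signaling approaches can be sketched side by side. This is illustrative only: the field paths follow the XDM Commerce field group, but which fields you send depends on your own implementation.

```javascript
// Sketch of the two event-signaling approaches. Field paths follow the
// XDM Commerce field group; values are illustrative.

// Approach 1: the eventType field -- only one event type per call.
const xdmWithEventType = {
  eventType: "commerce.productViews"
};

// Approach 2: dotted .value fields -- several events can share one call.
const xdmWithValues = {
  commerce: {
    productViews: { value: 1 },
    productListAdds: { value: 1 } // e.g. a cart add signaled in the same hit
  }
};

console.log(xdmWithEventType.eventType, xdmWithValues.commerce.productViews.value);
```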

The SDK will automatically add timestamps and IDs to every event, so you don’t need to worry about those. Our other events come from in-store purchases. Those can be streamed or batched into Platform using the available source connectors in our catalog or by using the API. I use the API to ingest my sample data. This data should map to the same XDM fields as your web and mobile data, so that you can easily identify purchases across all sales channels.

What about our messaging requirements? How do we collect the product details, in case we want to display things like product names and images in our re-engagement messages? I’m collecting the product SKUs and names here in the Web SDK implementation. You can see them in this product list items array. The image URLs I don’t collect client-side. If we go back to my ERD, note that I have a separate product catalog schema that uses a custom product catalog class. I use what’s called a schema relationship to map the SKUs from my events to this schema. So it’s basically a lookup table, and all I really need to collect in my events is the SKU.
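A product view event carrying just the SKU and name in the product list items array might look like this sketch. The SKU and name values are made up for illustration; only the SKU is strictly required, since the image URL and other details come from the product catalog schema via the schema relationship.

```javascript
// Sketch of a product view event that carries the SKU and name in
// productListItems. Only the SKU is strictly required here -- it is the
// lookup key into the product catalog schema. Values are illustrative.
const productViewXdm = {
  eventType: "commerce.productViews",
  commerce: { productViews: { value: 1 } },
  productListItems: [
    {
      SKU: "LLWS05.1-XS",        // lookup key into the product catalog schema
      name: "Salty Breeze Tank"  // handy for personalization, not required
    }
  ]
};

// With the Web SDK loaded: alloy("sendEvent", { xdm: productViewXdm });
console.log(productViewXdm.productListItems[0].SKU);
```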

Now let’s move on to our customer attribute data. CRM data can be onboarded from the CRM source connectors available in the catalog, through cloud storage connectors, or via API. I’m using the API to batch ingest sample data.

Consent preferences can also be onboarded from the consent and preferences source connectors in the catalog, through cloud storage connectors, or via API. Again, I’m just using the API to batch ingest my sample data.
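As a rough sketch of the API route, batch ingestion follows a create / upload / complete flow. The endpoint shapes below follow the Experience Platform Batch Ingestion API, but this is an assumption-laden sketch: all header values are placeholders, and you should confirm the details against the current API reference before relying on it.

```javascript
// Hedged sketch of the three-step Batch Ingestion API flow (create batch,
// upload file, complete batch). All header values are placeholders.
const BASE = "https://platform.adobe.io/data/foundation/import";
const headers = {
  Authorization: "Bearer <ACCESS_TOKEN>", // placeholder
  "x-api-key": "<API_KEY>",               // placeholder
  "x-gw-ims-org-id": "<IMS_ORG_ID>",      // placeholder
  "x-sandbox-name": "<SANDBOX_NAME>"      // placeholder
};

async function ingestBatch(datasetId, records) {
  // 1. Create a batch targeting the data set
  const batch = await fetch(`${BASE}/batches`, {
    method: "POST",
    headers: { ...headers, "Content-Type": "application/json" },
    body: JSON.stringify({ datasetId, inputFormat: { format: "json" } })
  }).then((r) => r.json());

  // 2. Upload the sample records as a file within the batch
  await fetch(`${BASE}/batches/${batch.id}/datasets/${datasetId}/files/data.json`, {
    method: "PUT",
    headers: { ...headers, "Content-Type": "application/octet-stream" },
    body: JSON.stringify(records)
  });

  // 3. Mark the batch complete so Platform picks it up for ingestion
  await fetch(`${BASE}/batches/${batch.id}?action=COMPLETE`, {
    method: "POST",
    headers
  });

  return batch.id;
}
```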

Loyalty system data can be onboarded using cloud storage connectors or the API; again, I’m using the API. As for the test profiles, since these are more of an internal configuration for the marketer, they are usually just dragged into the UI on the data set screen. So that should cover data ingestion and our messaging requirements. And that concludes our implementation video, showing how we built our schemas, chose and populated our identities, built data sets, ingested data, and constructed real-time customer profiles. As you can see, we took the design pattern from the use case document, broke down the problem into the areas of identification, qualification, and messaging requirements, and used an entity relationship diagram to help focus on the use case and make sure we were capturing everything we needed to accomplish it. I hope you’re able to use this example to implement the use case at your business.

Journey Configuration

Transcript
Welcome! In this video, you will learn how to implement the abandoned product browse scenario in Adobe Journey Optimizer. I’m Sandro Ausmann, Senior Technical Marketing Engineer at Adobe. This video is for the marketer. I will walk you through two implementation options using an example of a fictional retail brand called Luma. This scenario can also be applied to other industries, like financial services or pharma. I will explain the reasoning behind each implementation step and what you will need to implement, but I will not go into the depth of how to set up each element of the journey. You should therefore know how to create a journey, messages, audiences, and events. If you are unfamiliar with any of these concepts, please review the tutorial section on Experience League for detailed guidance before starting this tutorial. Now let me go through the technical prerequisites. You will require the following permissions: Manage Journeys, Publish Journeys, Manage Journey Events, Data Sources and Actions, Manage Messages, Publish Messages, and Manage Segments. You will also need to have the communication channels you would like to use configured. In this example, we are using email, SMS, and push. Let’s review the business requirements for the abandoned browse scenario. In this scenario, we aim to re-engage a customer who viewed a product on either the website or the mobile app and did not continue to interact with the brand. We will show the customer personalized ads as well as send them re-engagement messages via email, SMS, or push. The re-engagement through the messaging channels will be implemented with Journey Optimizer. The destinations framework is used for paid media ads, which is covered in another video. The scenario is triggered when a product has been viewed. When there is no brand engagement in the following three days, we will send the customer a message with a call to action to re-engage.
Brand interaction would be an online or offline purchase, a product added to the cart, a site visit, or any interaction with the app. After another three days, should the customer still not engage, we will send them another reminder message. Now let’s get started. Before you can start building your journey, you will need to create two events. The journey is triggered by a Product View event, which means a profile enters a journey when they view a product. Journey Optimizer will then listen for an hour after the product view to see if this customer added any product to the cart. If they did, we will end the journey for this specific profile. Let me show you how the events are configured. The Product View is a unitary, rule-based event. The Luma schema we are using is the web and mobile event schema. We have added the following fields in the event payload: the identifier, the event type, which we will use for the condition, the identity map fields, which link this event to the profile, and the timestamp. The other fields are specific to Luma and depend on your specific data structure and which information you want to have in your payload to use to personalize the messaging. The event condition needs to be defined as: type is Product Views, and the namespace is ECID. The second event we need is the Add to Cart event. It is also a unitary, rule-based event. Again, the fields depend on what you require in the payload and how your data structure is set up. For this use case, we don’t require a lot of information, as we just need to know if this event was triggered or not. Identifier, event type, and identity map will be sufficient. However, if you are planning to implement the abandoned cart use case as well, you should add all the information you require to personalize the communication with the customer, like the cart ID, the ID and value for the product list adds, as well as the required product catalog data, such as the image URL, the SKU, product name, price, and so on.
Once the events are available, you can add them to your journey. You can just work with a Product View event, but in this retail scenario, listening for an Add to Cart for an hour after a Product View allows you to exit profiles who have a buying intention from your journey earlier. Without this event, all profiles will stay in the journey for at least 3 days, since the events are followed by a 3-day wait before we check if the customer has been inactive since the Product View. So how do we check if the profile engaged with the brand? Remember, engagement with the brand means browsing the website or interacting with the app, adding a product to a cart, or an online or offline purchase. So the online and offline purchase is important. Depending on your data structure, you have two options to implement this use case. If your data structure has all event information in one schema, you can simply create a brand engagement event, and instead of listening for the Add to Cart after the Product View for an hour, you can listen for any brand engagement for 3 days. In that case, we won’t require a wait activity, and the profile will exit the journey as soon as they engage with the brand. But if you have the event data in different schemas, one for each data source, for example, like Luma does, then you cannot use an event to listen for overall brand engagement, as each event can only link to one schema and Journey Optimizer can only listen to one event at a time per journey. So if you have your online and offline purchases in two different schemas, we will need to work with audiences and a condition that checks if the profile is part of an audience. If they are, they will exit the journey. So we need a batch audience that a profile qualifies for if they viewed the product in the last 3 days and engaged with the brand after the Product View.
However, Adobe Experience Platform batch audiences are only calculated once a day, which for this scenario potentially means that profiles can be missed. So let’s take a look at the timeline to understand the implications. Our timeline starts when the profile views a product and enters the journey. After listening for an Add to Cart for an hour, we have a 3-day wait before we check if the profile is part of the engaged audience or not. Since the audience batch generation only runs once a day, the first audience that will be relevant for us will be generated sometime within 24 hours after the Product View. So let’s assume this is when the audience is calculated. A profile will qualify for the audience if they viewed a product in the last 3 days and after that engaged with the brand. So you can see the yellow block shows the timeframe in our journey when the profile can qualify for the audience. Now let’s take a look at the situation at the time of the condition check. As you can see here, the third batch generation run after the Product View is the relevant one for our condition check. You can also see that the audience will include all profiles that viewed a product, so entered our journey, and then had brand engagement since the initial Product View. So far so good. However, due to the timing of the batch run, we’re going to have a blind spot exactly between the last audience calculation and the condition check. This blind spot means that any profile that engaged with the brand during this time will not be added to the audience, which ultimately means that they remain in the journey and we will be targeting them, and of course we don’t want to do that. Depending on when within the 24 hours after the Product View the first audience calculation runs, the timeframe of this blind spot can be longer or shorter, but it will never exceed 24 hours.
So, to bridge the gap, we will define a second audience, which looks for profile engagement within the last 24 hours. This audience will be of type streaming, so a profile gets added in real time. Unfortunately, we have yet another issue. Because we added an hour wait, if our audience calculation runs within an hour of the condition check, then, as you can see here, all profiles in the journey will disqualify from the batch audience, because the first rule of our batch audience, the Product View, falls outside the lookback timeframe. So, to solve this issue, we simply expand the lookback period from 3 days to 4. That way we have a 24-hour buffer again, we are on the safe side, and we do not miss any profiles. In our first condition, we will therefore check if the profile has not qualified for the two audiences. If it has, the profile will exit the journey. If they were not engaged, we check what their preferred channel is and if we have consent to contact them via this channel. Now let’s talk about personalization of the messages. Let me navigate into the email. Now, if you would like to refer to the product the customer viewed in your communication, you have two options. You can personalize the message by using the contextual attributes coming from the Product View event. In this case, it is the first product the user viewed, the product view that triggered the journey. If you expect that the customer might have viewed other products after they entered the journey and you would like to personalize on the last product viewed, not the first, then you can work with computed attributes. These will need to be set up before you can personalize your message. Please see the product documentation for a detailed description of how to set up computed attributes. The rest of the journey is straightforward. Journey Optimizer waits for 3 days after sending the message and then again checks the audience membership.
In this case, we only need to check for engagement in the last 3 days, so we require a third audience. This one also needs to be a batch audience. And to cover the potential blind spot we have due to the timing of the batch run, we also need to check for membership in the streaming audience of engagement in the last 24 hours. If the profile is not a member of either audience, the customer has not engaged with the brand since the last message was sent, so we will check the channel preferences and the consent once again, then send a second re-engagement message and end the journey. If your data structure allowed you to work with a brand engagement event instead of the condition check on the audiences, then of course you again listen for the brand engagement event, set a timeout for 3 days, and do not require the wait. Now you should know how to implement the abandoned product browse use case in Adobe Journey Optimizer for your brand. Thank you for watching!

Audience and Destination Configuration

Transcript
Hi, it’s Daniel. In this video, I’m going to show you how we set up the activation of our paid media campaign for the Intelligent Re-Engagement abandoned browse scenario. We set up this scenario on our retail demo brand, Luma. First, let’s review the scenario diagram. Eligibility for the paid media campaign begins when a customer views a product on either our website or mobile app. After the product view, we want to give the customer a little time. Maybe they’re going to make a purchase without a nudge. Maybe it’s a purchase decision which takes a few days, and they’re still actively engaged with your brand. We don’t want to waste advertising budget on active customers. So we’ll wait for three days, and if they haven’t engaged with our brand again, we’ll enter them into our paid media campaign. We’ll also send them a message from our Adobe Journey Optimizer journey, which is covered in a separate video. So we show them the ad for three days, and then that’s it. Whether they engage or not, we want our paid media campaign to end. To execute this, we need an audience to detect the lack of brand engagement and a destination to which we’re going to send this audience. Let’s start with the audience. I’m going to show you the end result and then walk you through how we got there. We ultimately built three audiences: two which look for specific behaviors, and a third which combines them. Let’s dive in. We started by tackling this portion of the diagram, looking for a product view and then excluding people who engaged with the brand, engagement with the brand being defined as anyone who bought something or even just came back to the mobile app or website. This is what our final definition looks like. If you’re new to building audiences, this plain-language description is incredibly helpful to understand how the audience behaves. Now I’ll show you how we actually built the audience. I’ll start by dragging the product view event onto the canvas.
Next, we add our exclusions: excluding people who made a purchase, excluding people who launched the mobile app, and finally, excluding anybody who visited the website. I’ll explain why I didn’t just use the page views event in a minute. Also, a very important note: I use the AND condition in the exclusion to make sure none of these events occur. When choosing how to define these events in the audience builder, we need to know a little bit about how events are collected in the source systems gathering this data, and this might be different for your implementation. We pass eventType commerce.productViews on product pages on our website and mobile app, which is why I was able to drag that product view event into the audience builder. Those easy-to-grab events are dependent on the use of eventType. In our Web SDK implementation, we don’t pass eventType web.webPageDetails.pageViews on every page load. On this product page, remember, I’m passing commerce.productViews, and there’s a limit of one event type per call. We do pass web.webPageDetails.pageViews.value equals 1 on every page, which is why I use that in my audience definition. OK, let’s resume building our audience. Now, looking at our diagram again, we see there are some time considerations. We don’t want the visitor to qualify for this audience unless they’ve been disengaged for at least three days after the product view. Also, after six days, we want them to fall out of the audience and no longer be in the paid media campaign. There are a lot of places to add time constraints in the audience builder, and they each have a different impact. There’s one here, here, and under here. For this use case, I add it to the product view event: a rolling range of three to six days ago. The description down here is your friend to see in plain language if the time constraint makes sense.
Now, one thing we realized is that we don’t want to be too strict about banning brand engagement immediately after the first product view. For example, if someone looked at one product and then right afterwards looked at some other products or went to the home page, really went anywhere else in the website or mobile app, we wouldn’t want to kick those types of people out. So this is where our other time consideration comes in. We only want to look for the absence of brand engagement beginning an hour after that initial product view, which gives a little buffer for the visitor to complete that visit or session. We add that buffer here before the exclusion. So that’s our first audience. We save it as a batch audience, which is entirely suitable because of the time horizon. Now, because of that one hour buffer, we need to be careful. We just created a blind spot and we don’t want people who make a purchase or add a product to their cart during that hour to qualify for our paid media campaign. We don’t want to spend ad dollars on people who just bought something from us and people who added something to their cart but didn’t purchase. We’d prefer to save them for our abandoned cart scenario. So we built out our next audience to look for product views in that same window of three to six days ago and then do not purchase or add anything to their cart within one hour. At first, we tried to build this logic into our original audience, but there is a restriction on sequencing exclusions. Now we can build out the third audience, which just looks for anyone who qualified for both of the other two audiences. Now that we have our audience, we can activate it to our advertising destination. The configuration is going to be different based on what destination you use. All advertising destinations require you to use something as an identifier. For people who’ve never logged in to the website, a destination like Google DV360 might be useful. 
To do this successfully, you’d need to implement a sync container on your website and mobile app to synchronize identifiers. For people who have authenticated, you can use other destinations like Google Customer Match and then use a hashed email, phone number or mobile device ID as the identifier. If you haven’t yet configured a destination, here’s a quick overview of the process. You find the destination you want to use in the catalog and configure the destination. Now, many destinations require more authentication and account details than this one. Once the destination is configured, you can add audiences to it. Advertising destinations only share audience qualification. They don’t share profile attributes. So we can’t share the details of the last product the customer viewed. Now, I have a bunch of test scenarios I use to validate the behavior of this audience. But what you should see when you test on your own website and mobile app is immediately after browsing the website or mobile app, you should see the product view event captured in your profile. On the third or fourth day after the product view, you should see that you qualify for the audience and you should continue to qualify for the next few days. Remember, these are all batch audiences and will evaluate once a day. You can view the timestamp of the last audience evaluation by opening up the audience in edit mode. So if you viewed a product immediately after the last evaluation, it might take a little bit longer for you to qualify for the audience than somebody who viewed a product a couple hours before the audience evaluated. The audience qualification leads to the person’s qualification status being shared to the configured destination. And assuming you’ve configured your paid media campaign, they will qualify for that. After six or seven days, the person should fall out of the audiences and then cease seeing the ad campaign. That’s it. 
I hope this helps you to implement the abandoned browse scenario at your company.