New Relic Essentials for AEM Cloud: Optimize Performance and Prevent Issues
In this session, learn how to maximize your New Relic entitlements to monitor and optimize AEM Cloud performance, efficiently diagnose issues and enhance your AEM environments. Explore real-world scenarios and best practices to gain actionable insights for superior performance management.
Key Discussion Points
- Learn to efficiently navigate the New Relic platform and dashboards for monitoring AEM Cloud performance
- Discover key metrics for identifying and troubleshooting performance bottlenecks in AEM environments
- Explore best practices and strategies to optimize AEM performance using insights from New Relic
Hey.
Hello, everyone. Thank you so much for joining us today.
We will be getting started in a couple of minutes.
Just wait.
We want folks to drop in.
We'll wait until we hit the 30.
So hello again, everybody. Thank you so much for joining us today.
Today's session is all about New Relic Essentials for AEM as a Cloud Service, and it will be led by Nitesh Kumar.
So Nitesh, let's jump to the first slide for just an overview while we wait for a few more people to join.
There are three remaining sessions for this current quarter.
You can see on the bottom left.
I'll drop the links into the chat, so feel free to register for any of those. Please be aware that the next session on the list is right after this one.
And also, if you haven't had a chance yet, I highly recommend checking out our previous sessions. There are over 60 recorded webinars available on Experience League, and they are packed with insights and best practices that you can explore at your own pace. And finally, we're just about to finalize our topics for the next quarter, Q3. So stay tuned, and be sure to check in with your CSM or your TAM about the dates for the upcoming sessions.
Before we jump into today's topic, I want to quickly mention our Ultimate Success accelerators. These are short, precise engagements designed to help you plan smarter, enable your teams, and execute more effectively. We focus on four key areas: technical readiness, organizational readiness, adoption and enablement, and the latest one, GenStudio for Performance Marketing activation. These are part of your Ultimate Success plan and can be scheduled with your CSM or TAM.
Those are a great way to accelerate value and get the most out of your ÃÛ¶¹ÊÓÆµ solutions.
Next slide, please.
As we fill the room, welcome again, and thank you for joining today's session focused on New Relic Essentials for AEM as a Cloud Service. My name is Bjorn Schott. I'm a manager in the field engineering team. Our mission is to help Ultimate Success customers get the most value out of their ÃÛ¶¹ÊÓÆµ solutions. As part of that mission, my team, including Nitesh, delivers success accelerators tailored specifically to our Ultimate Success customers. On a quick note, this session is being recorded, and we'll be sending the recording link to all who registered for this webinar. This is a listen-only webinar, but we absolutely encourage you to share any questions in the Q&A pod or in the chat.
We will be responding to your questions, and we have also set aside time at the end to answer questions live.
And now I’d like to hand over to Nitesh to introduce himself and take us into today’s topic. Nitesh, the floor is yours.
Thanks, Bjorn.
Hi, everyone. My name is Nitesh Kumar, and I'm a senior consultant at ÃÛ¶¹ÊÓÆµ. I'm part of the Ultimate Success team like Bjorn, where I work as a multi-solution architect, but I primarily focus on AEM. In my role, I work on launch advisories and field services, which are success accelerators that involve reviewing solutions and troubleshooting issues. During those engagements, I had the opportunity to look into New Relic quite a bit, and over time I got some experience with it. That also motivated me to go for the New Relic certification to gain a deeper understanding of the tool.
So in today's session, I'll be sharing those learnings with you. We'll start with the basics of New Relic, and then we'll explore how it's used in real-world AEM as a Cloud Service projects. There are multiple ways you could leverage it. So let's quickly take a look at the agenda.
We will start with a high-level introduction of New Relic as a tool, like a generic overview. We will see what's included in the New Relic that is bundled with AEM as a Cloud Service. We will see how to access and navigate the New Relic One account. We'll also see the key features and capabilities that you can leverage within the entitlement.
I have prepared two demos. One shows how you can use it for your operational monitoring.
The other demo comes from an experience I had working on solution troubleshooting. I tried to recreate the issue, and we'll see how New Relic can be leveraged as a tool to troubleshoot it.
And then we’ll look into the limitations and certain best practices which you can use in your project.
Lastly, we will leave the floor for Q&A, but don't wait till the end. You can always write your questions in the chat, and we will take them towards the end.
So New Relic is a full-stack observability platform, which lets you monitor your applications, your infrastructure, and any databases you have connected, all in real time. It brings together all the metrics, events, logs, and traces, also called MELT, to give you a complete view of your system. You can deep dive into any aspect of your application, whether that's checking traces or doing other analytics. You also gain insight into the servers, especially in the cloud context. You can look into the containers and how they are performing. If there are any databases connected, you can monitor how they are performing, and you can use those metrics to improve your queries, for instance.
One of my personal favorites is custom dashboards, which help you visualize the KPIs you decide on, with the help of the New Relic Query Language (NRQL), which is very similar to SQL. If you're familiar with SQL as a language, it's quite easy to get started.
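To give a feel for the syntax, here is a minimal NRQL sketch. The appName value is a hypothetical placeholder, so substitute the application name you see in your own account:

```sql
// Average backend response time of web requests, charted over the last day.
// 'your-program-env-publish' is a placeholder app name, not a real one.
SELECT average(duration)
FROM Transaction
WHERE appName = 'your-program-env-publish'
SINCE 1 day ago TIMESERIES
```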
Now, New Relic comes bundled with AEM as a Cloud Service, so it gives you built-in observability without any additional setup on your end. One thing to note is that it's for the backend; it doesn't have the front-end integration, which is possible when you have a standalone New Relic account. It provides visibility into application performance, and you can gain insights into infrastructure, like how individual pods are performing. You can look into the JVM threads.
It's customized for AEM as a Cloud Service. That means you can see the JCR transactions and also the workflows. If you have any custom workflows, you can look into the author and see how those workflows are performing in real time.
We also expose a few JMX MBeans that are surfaced in the preconfigured dashboards that come with your account. So you can use those MBeans to gain insights into specifics, like slower queries. Those are the things that are available within the New Relic account for you.
As part of this offering, customers are entitled to one New Relic account per program.
This account is pre-instrumented, so you can track things like JCR transactions, workflows, and other KPIs that come out of the box. It doesn't support alerting or logging, nor any API integration on your side.
It is available for production and non-production, so if certain issues come up on production and you want to recreate them on non-production and monitor them, you can do this using the New Relic account.
This is applicable for both AEM as a Cloud Service and Managed Services customers. So even if you are on Managed Services for AEM, you still get the New Relic account as an entitlement.
You access New Relic directly through Cloud Manager. Most of us, I'm assuming, are familiar with Cloud Manager. It is the starting point for AEM: you can access your environments, go deeper into each individual environment, and even access the Developer Console through it. In the same way, if you have to access New Relic, you will land on this page, and from here you can click the link which takes you to New Relic. Typically, when you have a new account, you will see an Activate button which you can click, and that will activate the New Relic account for you. It is all self-serve, so you don't have to reach out to ÃÛ¶¹ÊÓÆµ for this.
One thing to note here is that the first time you activate the account, you need to run the pipeline for that specific environment so that its data can make it into New Relic. This is also where you manage the users. So if you have to add new users or remove certain users, you can do it from this window.
One more thing to note is that the login to New Relic doesn't go through IMS, so you have to create your account using an email address and a password. You will always log in using that email address and password.
You won't see certain buttons enabled if you do not have the right roles; you should be a Deployment Manager or Business Owner. That's a prerequisite if you want to access New Relic or manage its users through Cloud Manager.
Now, it offers quite a few friendly interfaces. You have APM, which stands for Application Performance Monitoring, which essentially gives you insights into each request being executed within AEM. It could be certain pages, it could be your servlets, or it could be requests for resources like CSS or JS files. It lists all those transactions in the APM window.
It also lists all the JCR queries, be it out-of-the-box queries that are being executed or queries you wrote in your own custom code; you will see those queries popping up in this window as well.
When you get a New Relic account, you do get some pre-built dashboards, which we'll also see later. You can start with those dashboards, but you're not limited to them. You always have the possibility to create a new dashboard. However, like I said, to do that you need to know the New Relic Query Language. You can also leverage the existing dashboards: just copy and modify them to create your own.
A very important aspect is the JVM, which is often ignored. You get insight into the JVM as well. If you go to this window, you will see the memory consumption, for instance, the CPU consumption, the thread count, and the garbage collection time. This gives you an indication of how your application is performing.
It also helps you identify any performance areas, which could eventually help you improve over time.
Like I mentioned, there are certain preset views and dashboards which can be explored, and then you can build on top of those.
There are certain valuable use cases in the context of AEM as a Cloud Service. After working on solution reviews and troubleshooting, I tried to pick certain use cases that might be beneficial for you. The most used, I would say, is transaction monitoring, which essentially helps you understand how requests are performing.
You can see the external dependencies and the database performance. So if you want to trace those requests or deep dive into them, that's helpful. The other use case is performance optimization. This could be because some new code was added, or because you recently performed load testing and want to identify the areas that need improvement. It can help you identify those transactions, and potentially detect memory leaks early, which helps you build a more optimized solution.
Since it is also instrumented for cloud, you can gain insights into workflows. So if you have a custom workflow that might be taking too much time, for instance, or if you think there is a pattern where, whenever those workflows run, you see certain issues on the author or publishers, you can make use of New Relic to detect those.
The same goes for the JCR instrumentation. Most of the time you can see those JCR queries here. For me, the starting point is always New Relic, but if you want to see how those queries are performing and gain a deeper understanding, you can go to the Developer Console; within AEM you also have a feature for checking query performance. So you can see how the query is performing and how it scores against the indexes. Those would be a follow-up on top of this.
Now, in my experience, these are the top insights that come up most often.
The first one is tracing transactions. At any point in time, you can log in to New Relic and see the requests. Like I mentioned, it's not only HTML pages; if you have certain servlets, those are also listed, so you know which requests are taking too much time. Since it provides a component-level breakdown, you can literally break a request down and see where the time is being spent, including Java methods and classes. It gives you an indication of how much time it takes to process at the backend. And as I mentioned, it's not for the frontend; most of the time the request is cached, so you will be monitoring the requests that make it to the publisher.
The other important aspect is JVM monitoring, which helps you identify patterns around memory and CPU usage. That can give you an indication of whether the issue is with a page, or whether, at a certain time, certain jobs you're running might be causing a spike. You can clearly identify those patterns.
It's also important to look into the thread pools. If you're making too many calls or opening too many threads, you can see what the thread pool utilization was at any given point in time.
It also gives you an indication of how many classes are loaded into memory at a certain point.
One important signal is when you notice that a lot of time is being spent in garbage collection. That's a clear indication that a large part of the CPU is being used for garbage collection, so there is potentially a chance to optimize the code. Most of the time, these are early indicators of a memory leak, or a starting point for doing some performance optimization in your code base.
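For charting this, GC time can in principle be queried from APM metric timeslice data. This is only a sketch: the 'GC/%' metric names vary by JVM and collector, so treat them as assumptions and browse the metric names in your account first.

```sql
// Time spent per garbage collector over the last three days.
// Metric names under 'GC/' are assumptions; verify them in your own account.
FROM Metric SELECT average(newrelic.timeslice.value)
WHERE appName = 'your-program-env-publish'
  AND metricTimesliceName LIKE 'GC/%'
FACET metricTimesliceName
SINCE 3 days ago TIMESERIES
```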
Then we have the dashboards. You do get pre-built dashboards for your account. As you can see in the screenshot, you get dashboards for prod and stage, a dashboard specifically for indexes and how they are performing, and one for Sites as well. The screenshot is cut off a bit, but when you go to New Relic, you will see more dashboards there. Going into these dashboards, you will see lots of widgets, like slower queries or slower transactions.
Everything that you see in a widget is built out of a New Relic query, which we have talked about. You can essentially copy those queries, and if you want to build your own dashboard, you can create a widget and add those queries there. Then you will have the widget in your own dashboard.
And like I mentioned, you need to know a bit of the New Relic Query Language, but from my experience it's not that difficult. It's very much like SQL. It's just about knowing the metrics and some of the syntax you need to be aware of, and then you can build these custom dashboards yourself.
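As an example of the kind of query you might paste into a new widget, here is a sketch (same placeholder app name as before) that ranks the slowest transactions:

```sql
// Top 10 transactions by 95th-percentile response time over the past week.
SELECT percentile(duration, 95), count(*)
FROM Transaction
WHERE appName = 'your-program-env-publish'
FACET name
LIMIT 10
SINCE 7 days ago
```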
Now that we've talked a bit about theory, I would like to go into the demo part. Like I mentioned, I have created two use cases. The first use case: when I do a solution review, I always start with this monitoring. For me, the window is usually seven days or 30 days, depending upon how long the application has been running.
The other use case comes out of troubleshooting. I tried to recreate those cases. We'll see, when your page is performing slowly (it could be a servlet as well in your case), how you can identify the bottlenecks and troubleshoot them. So let's first go into the operational monitoring.
For the teams that are managing an AEM Cloud environment, it is highly recommended to use New Relic for daily monitoring, because it helps you perform those checks visually. If you have defined KPIs, you can easily convert them to widgets or dashboards, so you can visually see how your system is performing.
It can clearly help you with any performance degradation. There are cases where you do a build and then start seeing degradation in your system. New Relic offers you a window through which you can compare this week with last week, or today with yesterday, depending upon when the build was deployed on your system. You can use it to monitor the throughput and response time of your requests. That gives you an indication; maybe it helps you identify certain parts or areas of your application that might be taking some time.
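That week-over-week comparison can also be expressed as a query. A hedged sketch, again under the placeholder app name, overlays this week's response time and throughput on last week's:

```sql
// Response time and throughput, this week overlaid on last week.
SELECT average(duration), rate(count(*), 1 minute)
FROM Transaction
WHERE appName = 'your-program-env-publish'
SINCE 1 week ago COMPARE WITH 1 week ago TIMESERIES
```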
It also helps you observe system resources like CPU, memory, and garbage collection activity. And it can help you spot anomalies, not only across publishers but also across authors, because in New Relic you can monitor both of these systems.
Now I'll take you to the window where I can show you what the screens within New Relic look like and how we can navigate them. So let me go to New Relic.
First I will go to the Cloud Manager that I talked about. Here we have all the environments listed, and if you click on these three dots, that's where you find the button for activating New Relic. Also, if you want to add or remove users, you can click on Manage Users and then add or remove them. You can navigate to New Relic from here as well; when you click on this, you will land on this page.
When you land in New Relic, you will see all the environments listed: stage, production, and your dev environments, each with publisher and author. In my case, since this is my sandbox, it could be less than what you see at your end. For my use case, I will go to the publisher on dev. That's where all the transactions and requests are, so I'll be using this environment.
When you click on one of these servers, you will land on a summary page.
And there you can see we have certain filters you can check, like transaction type. In this case, you can filter whether you want to check web requests or non-web requests. If you are in an author environment, you might see workflows as well; since this is a publisher, you don't have them here. You can compare with yesterday, how it was behaving yesterday, and also with last week. The use case could be testing the performance after your last build, or after certain modifications, or, if there was some integration with the backend, you can use this filter to compare as well. You also have the possibility of changing the duration here. You can check the past three hours, three days, seven days, or three months; the maximum you can go back is three months. So the data is available for three months, which you can check.
Then you can check the instances. In my case, as you can see, only two pods are active at this time, but this could be more in your case. You can check the overview, or you can go into the individual pods to see how they are behaving.
You can look into the web transaction time from here, and you can look into the Apdex score. Ideally the score should be one: one is best, and the lower you go, the worse it is for your system. It means users are experiencing some issues with the page loads or the experience.
You will be able to see throughput, that is, what the throughput rate was at a given point in time. You will also see web errors popping up here. Like I said, it's a sandbox, so you might not see them here. Let's see if I change the window, whether I see something.
So here you can see some 4xx or 5xx errors, which is clearly a starting point, usually for me as well, to identify those errors. Then I would make use of a Splunk query to also validate this data.
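If you want the same breakdown as a query, a sketch along these lines groups error responses by status code. The attribute name is an assumption: newer agents record http.statusCode, while older ones expose httpResponseCode as a string.

```sql
// Count of 4xx/5xx responses grouped by status code over the last day.
// 'http.statusCode' is an assumption; older agents use 'httpResponseCode'.
SELECT count(*)
FROM Transaction
WHERE appName = 'your-program-env-publish'
  AND http.statusCode >= 400
FACET http.statusCode
SINCE 1 day ago
```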
It also lists the slowest transactions, so you see all those slow requests that are coming to your instance.
Then, if you have to go deeper into specific requests, you can always go to Transactions. That's where you will see those requests listed. For each request you can see details; in this case my filter is "most time consuming", but you also have other filters you can use. For transactions, as you can see, there is the web part, and if you want to focus only on the Java part, those classes will be listed here as well.
You can look into the time consumed by those web transactions here. You can look into the throughput, and the CPU and memory usage patterns for each pod during that time. As you know, in an auto-scaling environment, pods are being added and removed at any given point in time, so you can see how those pods were performing at the time.
And if you scroll a bit towards the bottom, you will see those requests, and along with them, how much time they take to process on the backend.
If you have JCR queries in your code, you will see all those database operations listed here. In this case, it's all out of the box, but if your code uses its own queries, those will be listed here too. You can see how much time they're consuming: the query time, the execution time, the throughput at that particular time, and also the traces. You don't see them here because I don't have any custom queries, but most of the time your slower queries will be listed here.
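To rank which transactions spend the most time in persistence calls, a sketch like the following can help; databaseDuration and databaseCallCount are standard APM transaction attributes, though exactly how they are populated in the bundled account is an assumption:

```sql
// Transactions that spend the most time in database/JCR calls.
SELECT average(databaseDuration), average(databaseCallCount)
FROM Transaction
WHERE appName = 'your-program-env-publish'
FACET name
LIMIT 10
SINCE 7 days ago
```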
Then you can also navigate to the JVM view to gain a deeper understanding of the system utilization. If you go to JVM and click on any of these servers, you will land here, with all the pods listed. You can check each of those pods, or you can just click on all instances, and it gives you the average view of the environment. You can see the response time, throughput, memory usage, and also the garbage collection CPU time.
And if you switch to Threads here, you will be able to see the thread count and the state of the threads at that specific time. These are very helpful when you are looking into any performance aspect of your application, or if you have to add new logic to your code and want to see how the existing one is working. I would highly recommend looking at this before you add it, because while there is auto-scaling on cloud, everything still has certain limits. This will make you more aware of how the pods or the environment are behaving, how you can improve things, and what to consider when you're writing new code.
That was pretty much it for the operational part. Now I will switch to the solution troubleshooting.
So let me share a story here. We had an engagement that came in as a solution troubleshooting last year, and I was approached by my manager, Bjorn, who is also on the call.
The issue was that there were complaints from end users about slowness in a specific part of the application, which was leading to a poor user experience and also to certain entries popping up on the analytics side. The symptom we observed was an increased bounce rate in the analytics.
And when we checked, the high time to first byte was a clear indication that the issue could be at the backend, because the pages were considerably slower when they were being loaded. The issue was limited to specific pages, meaning it was not generic. Since it was a cloud environment, it was a bit difficult to do the root cause analysis. We also tried locally, but the local setup had certain configuration that did not match the cloud.
That made it a little difficult to isolate, because it could potentially be a code issue, an infrastructure issue, a third-party issue, or a product issue.
The typical setup of any cloud or web application these days is this: you have a request coming from the browser, then a CDN, a second layer of caching at the dispatcher, and a publisher, which can sometimes also make calls to a backend service. This was more or less the architecture. I tried to recreate it. It's not the actual page or the actual data that I'm going to show you, but I recreated the same architecture and setup on my sandbox, just to show how this can also help you in troubleshooting such cases.
Let’s go back to the environment.
And first I will show you the page.
Let me get rid of this window here.
Let me append a query string here so that we see the page load.
As you can clearly notice, when we load this page, it takes too much time.
Now, this could be the server, because if servers are not responding, the page loads slowly in the browser; or it could be some logic blocking at the backend, so it waits for some processing at the backend before giving output to the browser. But before we look into the time... yes, it took about 33 seconds here. After this, you would usually start with New Relic. Since it is near real time, you can go to your New Relic account, and I would go to Transactions.
Let’s see if it refreshed.
I will just reload it.
Let me increase the window so that I can see the previous request.
Okay. So if I have to look into this specific request, I can go to Transactions, and as I said, all the requests will be listed here. Now for me, clearly, these are the requests taking too much time: 15 seconds, 30 seconds. If I have to check where the time is being spent, I would just click on one of these requests. It then gives you a window where you see the traces of the request, along with a high-level breakdown of where the time is being spent. Along with that, it also captures some of the attributes of the request that made it to New Relic.
If you have any JCR queries as part of your component or page logic, you will see them appearing here as well. In our case, we can always start at the top. So you see, this is the request here, and if we have to check the trace, we start from the top.
If you keep scrolling, you will see the breakdown of the logic. Here we are at the page component level, and if you scroll further, you see the call being made on the HTTP end, and it is taking about 31 seconds. This is a web service I hosted on another platform, where I blocked the processing just to showcase how much time it takes.
This was the same pattern I saw in that troubleshooting case as well. The recommendations provided were: you can always add caching at the server you are getting the response from, and you should set a timeout in your code rather than waiting 30, 60, or 180 seconds for it to complete. I have seen cases where it was more than one or two minutes, which is not a good practice. You should always set a timeout in the code to break off and fall back to a default or to some other logic. This can essentially help you with other requests as well. This example is a page, but if you have a servlet and those requests show up as slow, you can open the trace for those requests and drill down into each component to see where the problem is. So that was the troubleshooting case I wanted to talk about. Let's go back to the slides.
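To spot this pattern across the whole application rather than in a single trace, a hedged sketch using the externalDuration transaction attribute can rank transactions by the time they spend waiting on outbound calls:

```sql
// Transactions dominated by time spent in external (HTTP) calls.
SELECT average(externalDuration), max(externalDuration), average(externalCallCount)
FROM Transaction
WHERE appName = 'your-program-env-publish'
  AND externalDuration > 0
FACET name
LIMIT 10
SINCE 1 day ago
```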
So New Relic is a powerful tool. It helps you with a lot of things, but it's not a crystal ball that can look into the future. There are certain limitations as well. For instance, you're restricted to the predefined agents; you cannot do any custom integration on top of it. The data is retained for three months, so if you go into the look-back window within the tool, you will see it allows you three months, but not beyond that. At the moment, it doesn't support alerting, logging, or any sort of integration from your side. That could be something added in the future, because as your applications evolve, the cloud service is also evolving. But as of now, you cannot do those customizations.
No custom instrumentation is possible. You cannot do any of that engineering from your side; all the instrumentation is done by ÃÛ¶¹ÊÓÆµ engineering.
You cannot add additional New Relic products. You will see all the screens, windows, and components of New Relic, but if you try to access those, you will not be able to see anything, because they are not instrumented.
I tried to collect some best practices: the generic best practices and also some of my learnings.
One thing I can highly recommend is to regularly review the monitoring data to identify areas for optimization. Whenever I do a solution review, the first thing I look into is New Relic, to see how the application is performing over seven days or 30 days; you can pick any window you want, and that's where you start. It also helps you see or identify whether any issues have been introduced. Familiarize your team with the New Relic interface. This was a key learning for me as well: when I started with New Relic, I had to go deeper into it to understand its features.
Then you can make use of the New Relic documentation for advanced learning, and this can help you with troubleshooting, especially when it comes to JVM memory, transactions, or JCR queries; these are the things tied to your application performance, so you should monitor them within New Relic. If there are any external service calls, you see them listed in a separate dashboard, but at the same time, you can also trace them in the request details that we just saw.
You have certain dashboards that are pre-built for you, so make use of those dashboards. At the same time, as your application evolves, you may have certain additional KPIs. As and when they evolve, add those KPIs as new dashboards or new widgets, which you can do in the New Relic offering that you get with cloud.
So here are some additional resources. One is always the AEM documentation that we have for New Relic; if new things are added to the New Relic entitlement, you will always find them there. If you want you or your team to know more about New Relic, I can recommend the courses that are out there. I think most of them are free, so you can gain additional knowledge of New Relic from there. We have a strong AEM community on Experience League, so if you have any issues, you can always reach out to the community. We have many, many community members there, and people from ÃÛ¶¹ÊÓÆµ are active too, so you can always find your answer there.
And if you have to know about specific metrics, events, or specific terms from New Relic, I encourage you to go through the New Relic documentation to learn about those things. That will make your usage of this tool more effective.
Now I leave the floor for Q&A. I don’t know if we already have some questions.
We can take those.
Thank you, Nitesh. There are a few questions, but before we start with them, I'd like to start a quick poll on this session to hear your feedback, and then I'll guide you through the questions, Nitesh.
So there was a question at the very beginning, so I'll also try to answer some of them.
So the first one was about alerting.
If a customer needs alerting or notifications, can they embed and integrate their Dynatrace application monitoring and configure alerts using their Dynatrace? So I think the question is: can a customer have two APMs at the same time? We do not have any integration from New Relic for this kind of monitoring.
You can also not create any alerting from the New Relic account. You do get some notifications from ÃÛ¶¹ÊÓÆµ, but that's a different kind of notification. In terms of Dynatrace, as far as I know, we only support log forwarding, which you can do through Cloud Manager. But when it comes to integration with New Relic, there is no integration or API access possible at the moment. This can change in the future, but that's the current state as of today.
Wonderful.
Then there was a question: do you have some custom dashboards created that are applicable for AEM as a Cloud Service clients? I shared the full list, as we have just the reduced list here.
Just to probe on that: is there anything you always come across when you talk to customers about dashboards, something that is often asked for, just to give some ideas? You mean the available dashboards that we have online? Dashboards you've created during engagements? So first of all, I think you need to identify what kind of dashboard you want to create, and then you need to look into those JMX MBeans. Most of the time they are available there, but sometimes they are not; you can always reach out to Ultimate Success to find out, because I don't think those things are public in nature yet. But for me, those are very much edge cases where I had to look into those details. What you can essentially do is make use of the existing dashboards. You usually get four to five dashboards focusing on Sites, Assets, indexes, or specific queries. You can copy those queries and then build your own dashboard. That's my usual process. I haven't come across a case where I had to start from scratch yet, but I guess that's possible.
Thank you. Then there was another question, about email alerting, where I think we covered that this is actually not part of it. Feel free to put that in the poll, as we have a question asking about missing features.
Then there's more of a usage question, and I recall we had the same question before, so I tried to answer with a link to the documentation.
So the ask is: there's an organization which uses single sign-on with multiple accounts.
And this single sign-on always redirects, so the customer is placed in his own New Relic org and not in the ÃÛ¶¹ÊÓÆµ one.
If I recall correctly, there's a selector at the top right for the organization, where you can pick the right organization. Is that right? Or did I get that wrong in my memory?
You have multiple accounts. If I get the question right, you already have a New Relic account, but then you also want to use this account, and ideally you should be able to see both. I guess the only restriction I know of as of now is that the login mechanism has to be the same, because we do not support SSO; or at least the account that you get with the cloud service doesn't support SSO. Meaning you have to create it with a username and a password. And since your existing account uses SSO, which might be the case, that could be why you're not seeing it synced. In this case, I suggest you reach out to support, if you haven't done that already. They might figure it out from the backend, because this was the only requirement that I remember if you want to use your existing account with the one that you get; I can only see that case. The only SSO that I know of is for us; we do it internally, and that's how we can access New Relic. But customers have to create an account with a password. Maybe that's the reason you're not able to see it that way.
Okay.
Then there was a question about exporting in any format, which has been answered in the chat. There's an export function into JSON at the top right, so one can get the data in JSON format.
Then there was a question about exporting and importing dashboards. I responded that at least you can copy the queries in the NRQL language and set them up in a different account.
I hope that was correct.
Then there was a question about account access, so I responded to reach out to support.
We covered that.
SSO for customers is also not supported.
There's the instruction for... oh my God, there's a lot of stuff going on in the chat.
Access to New Relic.
Is the New Relic retention policy 30 days for AEM as a Cloud Service as well? It's 90 days, not 30 days. The 30-day restriction we have is for account deactivation.
So if there is no activity on your account or your environment for 30 days, your New Relic account might get deactivated. Then you need to follow the same steps as for activation: you just activate and run the pipeline, and the data starts coming up. For the APM data, retention is 90 days.
And then we have one more question in the Q&A: is there a plan, at a certain point, to have logs in New Relic, with options for querying them, instead of clients having to find a solution with another tool like Splunk or Elasticsearch for log forwarding? I'm afraid I will have to take that one with me, but you're probably not the first one to ask. I do see those requests when I work on those success accelerators.
I don't think there is any plan as of now, because the current focus is on log forwarding from Cloud Manager to different tools.
It started with Splunk, but now we also have forwarding towards cloud storage, if I'm not mistaken, and also Dynatrace. We don't know yet if this will be possible, but we can check and then let you know.
I can already recommend that you look into that documentation page as well, because as and when those features are updated, they get listed there. Also check the release notes of the cloud service and Cloud Manager, because that's where this information comes up as well.
If you are already an Ultimate Success customer, you can always reach out to your TAM to check if there is something in the pipeline. If something is in beta, maybe they can onboard you there.
As far as I know, it's not there, but if something comes in the future, you will see it.
A new question came in: can the Fastly CDN data also be seen in New Relic, and is that also kept for 90 days? No, you don't see the CDN data there yet. We do provide CDN logs, and there is an open-source tool; I don't know if you know about it. It's called the ELK analyzer tool, and I think it's developed by ÃÛ¶¹ÊÓÆµ folks as well. You can download that tool; it's essentially based on the Elastic stack.
I'm not familiar with all those technologies, but essentially you just run it; it needs Docker. You can feed it the CDN logs. You can also host it somewhere, so if you host it and keep feeding those logs, you will see the CDN cache hit ratio, for instance, or the misses, and you can check those requests. But you still don't have the capability of seeing those logs within New Relic.
Thank you.
Wonderful. I think we covered all the questions. I also responded a few times to open a ticket, as this is something we cannot sort out here.
Just double-checking the Q&A.
No, I don't see more questions.
Wonderful.
So thanks again to all of you for taking the time to join today's session.
There's one more question. Oh no, it was just a comment. So we truly appreciate your participation and hope to see you again in the future. Take care and have a great day. Thank you. Thank you. Bye-bye.