Blues Brothers Podcast

Q&A: Answering Listener Questions on LTV, Creative Testing & More

Nathan Perdriau & Sebastian Bensch | Episode 18

In this episode, Nathan answers questions from listeners about budget allocation for retention, the growth of BlueSense Digital, the effectiveness of Advantage Plus, TikTok advertising, server-side tracking, and the importance of creative testing. He emphasises the importance of merchandising in driving lifetime value and suggests that technical competence is key to agency growth. He also provides insights into the volume of creative testing required for effective advertising.

Takeaways

  • Lifetime value is largely influenced by the merchandising and quality of the product, rather than marketing efforts alone.
  • Technical competence is crucial for agency growth and success in the e-commerce industry.
  • Advantage Plus can negatively impact profitability, but with proper structuring and understanding of its limitations, it can be made to work effectively.
  • TikTok advertising requires a high volume of creative content and may be more suitable for enterprise brands.
  • Server-side tracking can improve tracking accuracy, but its incremental value may be limited.
  • Small creative tests may not yield statistically relevant results, and it's important to focus on new, innovative ideas rather than minor variations.
  • The volume of creative testing required for effective advertising can be much higher than expected, especially as ad spend increases.


Chapters


00:00 Introduction
01:17 The Impact of Merchandising on Lifetime Value
15:35 Making Advantage Plus Work for Profitability
20:05 The Limitations of Server-Side Tracking
26:49 Moving Beyond Small Creative Tests
30:09 The Volume of Creative Testing Required for Effective Advertising
32:37 Conclusion

Welcome back to the Blues Brothers podcast. In this episode, I wanted to run through a Q&A. Surprisingly, I get a lot of questions on LinkedIn and the BlueSense Digital YouTube channel. Even though these podcasts and YouTube videos only get 50 to 100 views, they're 50 to 100 real people, and so I end up with about five to ten people reaching out to me almost every week, either saying they love the podcast, which, thanks, I appreciate it, or with some kind of question. So I thought I would collect the best questions I've received over the last two weeks and bundle them into one podcast, where hopefully this answers the questions of anyone listening. Now, one of these questions is to do with the agency; all the rest are to do with e-commerce. I know agency podcasts do not do well, no one wants to listen to them, so that will be the third question. I'll go through it quickly and relate it back to e-commerce, so don't drop off once we dive into that. Let's dive straight in.

Question number one: what's the best allocation of budget to retention and enabling better lifetime value as an e-commerce brand? I'm going to give you an answer you're probably not expecting, which is this: as marketers, we like to think we have a lot of control over lifetime value. We like to think we can set up a fancy loyalty program, fancy email flows, campaign ideas, run 20% of budget on Facebook towards existing customers, and then uplift our retention and repeat purchase rate. At the end of the day you can, there is incrementality there, but it's not huge. Lifetime value is generally baked into the merchandising of the brand. What I mean by this is: you can have a CPG (consumer packaged goods) brand selling a terrible product versus someone selling premium, incredible pencil cases, and you're still going to have better lifetime value on the CPG brand, even though its product is worse, just due to the nature of the purchasing experience and the merchandising of that product: people consume the product and then need to buy another one. So you're naturally going to get better lifetime value. Now, you should benchmark against your individual industry and see how you can improve, but most improvements actually come from improvements in merchandising.

So in the pencil case example, you could go to a retention agency and say: hey, I want to run campaigns, I want to run flows, I want to do a loyalty program, I want to do a special offer on Facebook and Google Ads retargeting existing customers, we want to put all of these efforts in. You might get a 10% bump in returning customer revenue. Compare that to just launching a second product. You sell pencil cases; why don't you sell the pens that go in the pencil case? The second you do that, you'll see lifetime value lift about 50% overnight, and you'll realise that all of your effort shouldn't have been on trying to drive lifetime value through marketing. The immediate, incremental improvement in LTV was available just by expanding the merchandising. Now, you have to be very careful whenever you're expanding your merchandising or product portfolio: you don't want to dilute it with low-profit-contribution products or skew the direction of your brand, and you want to maintain brand continuity across those future product launches.
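To make that comparison concrete, here's a minimal back-of-the-envelope sketch. All numbers are hypothetical (they are not from the episode), and LTV is simplified to average order value times average orders per customer; the only point is that a retention push touches a small slice of LTV, while expanding the merchandising moves the whole number.

```python
# Purely illustrative LTV arithmetic (hypothetical numbers, not from the episode).
# LTV is simplified here to: average order value x average orders per customer.

aov = 40.0                  # hypothetical average order value for the pencil-case brand
orders_per_customer = 1.2   # hypothetical average orders per customer

baseline_ltv = aov * orders_per_customer

# Scenario A: a retention marketing push lifts returning-customer revenue ~10%.
# Only the repeat portion (orders beyond the first) gets the uplift.
repeat_orders = orders_per_customer - 1.0
ltv_retention_push = aov * (1.0 + repeat_orders * 1.10)

# Scenario B: launch a complementary product (the pens), which lifts repeat
# purchasing enough to move total LTV ~50%, per the example above.
ltv_new_product = baseline_ltv * 1.50

print(f"Baseline LTV:            ${baseline_ltv:.2f}")
print(f"After retention push:    ${ltv_retention_push:.2f}")
print(f"After merchandising add: ${ltv_new_product:.2f}")
```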
However, that's usually one of the best ways to improve lifetime value. So when I get the question of what's the best allocation of budget to retention, the only real answer is for you to test it systematically over time, and to make large budget swings to test it. To be completely frank, people don't like doing this, but it's what you'll have to do if you want an answer. Let's say you're the pencil case brand and you're currently running 30% of your budget on Facebook and Google towards existing customers. I would recommend dropping that to 10% and watching returning customer revenue week on week like a hawk to see if it drops. If it doesn't drop, you've just proven you were wasting 20% of your budget on existing customers who were going to buy regardless. And what you always have to be careful about whenever you allocate advertising budget to retention is twofold: number one, you're obviously taking away from customer acquisition; number two, you're now adding cost to the repeat P&L.

This starts to go into a whole other discussion about how you want to view e-commerce accounting and how you want to segment profitability across budget allocation. But you can take the view, and I like to take this internally (we have an internal model that shows this across our set of brands), that an e-commerce brand has a first-purchase P&L and a repeat-purchase P&L. Your first-purchase P&L is all your revenue from new customers, and its costs are all of the associated variable costs plus the direct marketing expense to acquire those customers. So if you're spending $100,000 a month but only 70% is going to acquisition, you would put 70% of the advertising budget there. Operating expenses are the tough one to split, so you can normally just leave those out and go to contribution margin three. Then you look at the repeat P&L: repeat purchase revenue, all the associated variable costs, and the other 30% of marketing. Then you can look at how profitable each of these cohorts is, which together sum to the actual revenue of the business.

What you'll end up finding is that if you allocate 30% of your budget towards remarketing, you are squeezing the profitability of your repeat P&L like nothing you've ever seen. Let's say your repeat P&L (your profit and loss segmented down to repeat purchases) is driving $50,000 in profit contribution per month. For those who don't like me using random terminology: profit contribution is profit before operating expenses, so we're nearly at net profit, nearly at EBITDA, we just haven't subtracted OPEX. So let's say you're making $50,000 in profit contribution, but you have $20,000 going towards repeat customers. If that $20,000 wasn't actually driving incrementality, which in most instances it probably isn't, cutting it would immediately take you from $50K to $70K profit on that repeat P&L, roughly a 40% uplift in profitability overnight, because you were over-allocating to retention.

Here's one more caveat before we move on to the next question: in most instances, a customer will come back and buy regardless of the marketing efforts put in place, if the product is good enough and the merchandising suite allows it.
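Here's a minimal sketch of that first-purchase versus repeat-purchase P&L split. The split itself and the $50K/$20K example come from the discussion above; the revenue and variable-cost inputs are hypothetical placeholders, not real figures.

```python
# A minimal sketch of the first-purchase vs repeat-purchase P&L split described
# above. The split is from the episode; the revenue and cost inputs are
# hypothetical placeholders. Everything stops at profit contribution (pre-OPEX).

def profit_contribution(revenue, variable_cost_pct, ad_spend):
    """Revenue minus variable costs (COGS, shipping, fees) and direct ad spend."""
    return revenue - revenue * variable_cost_pct - ad_spend

total_ad_spend = 100_000   # e.g. $100K/month, 70% to acquisition (per the episode)
acq_share = 0.70

first_purchase_pnl = profit_contribution(
    revenue=250_000,            # hypothetical new-customer revenue
    variable_cost_pct=0.45,     # hypothetical blended variable cost rate
    ad_spend=total_ad_spend * acq_share)

repeat_pnl = profit_contribution(
    revenue=150_000,            # hypothetical repeat-customer revenue
    variable_cost_pct=0.45,
    ad_spend=total_ad_spend * (1 - acq_share))

# The episode's worked example: $50K repeat profit contribution while spending
# $20K/month retargeting existing customers. If that spend isn't incremental,
# cutting it takes the repeat P&L from $50K to $70K, roughly a 40% uplift.
repeat_pc_with_retargeting = 50_000
retention_spend = 20_000
repeat_pc_without = repeat_pc_with_retargeting + retention_spend

print(f"First-purchase P&L: ${first_purchase_pnl:,.0f}")
print(f"Repeat P&L:         ${repeat_pnl:,.0f}")
print(f"Uplift from cutting non-incremental retention spend: "
      f"{repeat_pc_without / repeat_pc_with_retargeting - 1:.0%}")
```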
You can retarget people all you want, but if you've already sold them a pencil case and it sits on their desk every day, then when they go to buy a pen they'll probably just look at the pencil case and go: does this brand sell that product? They do; I remember from when I was purchasing that they had it as cross-sells and upsells, and they've sent me a couple of emails about it. I'll just buy it from them and keep that continuity. So is that to say the emails didn't drive incrementality? Well, they did in that instance; it was still important to stay top of mind. But was it important to keep spending money on me on Facebook and Instagram ads for the next six months until I decided to buy a pen? Probably not.

Question number two. We made two videos on the BlueSense Digital YouTube channel and I got a couple of comments about them. The first one was on how Advantage Plus is hurting your profitability: essentially a hit piece on Advantage Plus and why you shouldn't be running it, with all the reasons why. If you haven't seen that video, I highly recommend watching it; I think it's actually one of our best. Then about two weeks later I made another video on how to structure an Advantage Plus campaign. So people were a bit confused. They went: why did you make a hit piece on Advantage Plus and then a follow-up saying, hey, anyway, here's how to structure and run it?

Here's the caveat, and this applies to any campaign type really, but predominantly Advantage Plus and Performance Max now: Advantage Plus does ruin your profitability. It does, and there are multiple reasons why, which we go through in that video. The default conversion goal is maximise conversions, so it's not splitting budget effectively across profit contribution. In there we also talk about how it heavily retargets existing high-intent customers, so you're not really getting much prospecting unless you put some measures in place, et cetera, et cetera. So there's a bunch of things wrong with Advantage Plus where, if you were, for lack of a better word, a rookie advertiser and you just clicked create, dumped some creatives in and said yes, that's all good, you'd be hurting your profitability; your P&L would genuinely worsen from doing that.

However, that's not to say you can't run Advantage Plus and make it work for your P&L, or run Performance Max and make it work for your P&L. You just have to structure these things correctly and first understand everything about why they hurt your profitability. That's why we put those videos out in that sequence: first acknowledge that Advantage Plus is hurting your P&L, that there's a bunch of incentive structures within that campaign type destroying the profit contribution of your brand; then, once we understand those mechanisms, here's how to structure it to prevent that from happening. It's first being able to identify what's wrong, and then you can structure accordingly. So that follow-up video went into exactly how to structure it: profit contribution categories, segmenting at the Advantage Plus level, running dedicated retargeting to suppress retargeting within the campaign, audience exclusions and suppressions. The list goes on.
So we went through all of the measures you need to put in place, and then Advantage Plus works really well. Really well. So run it, absolutely run it, but make sure everything else is in place. This is where people get really burned: they're rookies when it comes to running Facebook and Google ads, so they just throw default campaigns up with no conceptualisation of what's actually happening, and then they wonder why their P&L is sitting at $0.

And I want to stress that DTC e-commerce is a very difficult space. People think you just make an e-commerce brand and print money. It's not true. Of the audits I've done this month, and I've done probably 14 to 15 and we're about 15 to 16 days into the month, 80% of those brands don't make money. The 20% that do are probably rolling at 15 to a maximum of 20% EBITDA and haven't really scaled in the last six months. There are some others in there that are probably at 25%, but they're much larger companies. That's the general reality, and the reason is that it's very difficult to make DTC e-commerce work. It very much becomes a finance game, a game of how do we acquire customers for as long as possible and maximise CLTV, and people aren't very good at that. It's very difficult to do. So making sure you understand the incentive structures of these individual campaign types and structuring accordingly is important. Is Advantage Plus bad? Yes. But can you make it work if you structure it right? Yes.

The next question is where I think we're going to lose people. Average view duration on these podcasts is 11 minutes, we're at the 11-minute mark, and we're about to touch on a topic people hate and that gets the worst views across all podcasts. So please don't drop off; prove me wrong here. The question is: how did BlueSense Digital grow so fast with no prior agency experience? To provide some context to that question: Sebastian and myself, the co-founders of BlueSense, had never worked in an agency before. In fact, this is going to sound crazy, but when we started BlueSense I didn't even know what an agency was. I'd never heard the term. I just said: I really like running Google and Facebook ads for my own e-commerce brands, I hate everything else, can we just do that for other people? Is that a thing? And then we went out and tried to tell people: hey, Nathan has spent hundreds of thousands of dollars of his own money on Facebook and Google ads very profitably, can we do this for you? Slowly, over time, we figured out how to get a client, and the company grew from there. We very quickly pivoted into heavy operational and financial focuses within paid media, as it became evident pretty quickly that the gap in growing most e-commerce businesses is not in scaling an ad account, it's in scaling the business. So we very much see BlueSense now not as a scaling-paid-media or scaling-ad-account business like everyone else here in Australia; we scale e-commerce companies, we scale businesses, and we do that by aligning financial operations with paid marketing. And so on.

Anyway, how did we grow so fast? I truly believe, and this has changed over time, that at the start we grew so fast because of our obsession with service delivery. We were obsessed.
And the reason we were obsessed is pretty obvious: you can't lose clients or you don't grow. Sounds simple, but if your business model is reliant on retention and you don't retain, good luck. In the early days, when me and Seb had one client, then two, then three, then four, just the two of us, and then five and six and seven, if we lost one, because we were acquiring clients so slowly and there was so much effort going into acquiring each one, we'd be taking a huge step back. So the focus should absolutely be on how we get more clients, but even more so on how we retain these clients and make them believe we're the best in the entire industry. So that's what we focused on. We implemented dedicated Slack channels, update videos, daily update messages in Slack. We built out reporting and dashboards. We had check-in calls with very strict agendas, notes prepped beforehand, agendas sent in advance. The SOPs just grew and grew around service delivery standards to make sure we had the best service delivery possible. And as an extension of that, account management: we became incredible, to be completely honest, at managing accounts and managing any escalations, and the list goes on, but also at the actual technical output.

These days, I don't believe that's really the reason we still have such incredible retention and why we're continuing to grow. I think it's because of the actual results we drive. In the early days we weren't sure whether that was the case or not, because obviously I'd driven a bunch of results for my own e-commerce brands, scaling four businesses to seven figures through Google and Facebook ads, and we were starting to do it for other brands, but we didn't have enough time. You don't scale a brand in two, three, four months. We were still building up time and applying everything I knew to these accounts to grow them. So we didn't yet have the confidence that our technical ability was the reason we were retaining, and that's why we went so hard on account management and on having perfect SDS, service delivery standards. Now I am confident that it's just technical ability, and I'll go as far as to say that I think technical ability is the only thing that matters. I think it's the reason we will continue to grow and it's where we're tripling down as a company.

Particularly if you look at the agency space within Australia, and sorry if anyone from an Australian agency is listening to this: there are no large agencies setting an example of high technical ability that then push out incredibly talented individuals. It doesn't exist, and that's what we're trying to build. For now we have to hire globally; it's very hard to hire in Australia. We do hire in Australia, and if you're in Australia and you think you're a fantastic performance marketer, please reach out to me. However, it's very, very tough, because there is no training vehicle within Australia pumping out incredibly talented e-commerce operators who know how to scale marketing. It doesn't exist, so we're creating it. The lucky thing is that there are a bunch of agencies internationally that do this, and they're really who we hold ourselves to the standard of.
So when we're looking at how good we need to be technically, we don't look around in Australia and go: okay, this is where they are, they're $30 million agencies, so that's where we should be in terms of service delivery, and let's just crank sales. No. We look at the absolute best in the world; that's the bar, and we need to get there. I think that will always win in the long term, because bad word of mouth will crush a business incredibly quickly, just like good word of mouth will grow a business exponentially. If you actually know what you're doing and you can actually drive results, you will win on a long enough time horizon. You don't even need a sales team; you'll just win because you don't lose clients and you continue to get referrals. Your growth is forced upon you if you provide incredible value to the market. So how did BlueSense grow so fast? We've had an immense focus for the last three and a half years on technical upskilling and becoming some of the best e-commerce operators in the country, through our existing client base as well as continued internal education.

Number four: what are your thoughts on TikTok? This is going to be a quick one. Probably don't run it unless you're an enterprise brand. The reason being, it doesn't just require, it demands huge creative volume, and most small e-commerce brands don't create creative in enough volume to sustain scaling on TikTok. If you're spending $80,000 a month as an Australian e-commerce brand, consider TikTok; probably layer it in with $5-10K of spend and start scaling it. You can become profitable on TikTok, by the way. We manage clients spending on TikTok, it's profitable, and we scale it month on month. It's just that we're lucky that in those particular businesses the founder loves creating content, so we can brief in content and every week we have three to four new pieces to test, which allows us to iterate and scale quickly. Without that, it's very, very difficult to scale TikTok, for example off stills. I know people do it, and there will always be someone out there with a YouTube video saying how they scaled off stills on TikTok, but as a generalisation, your efforts are better focused on Meta and Google.

Number five, and I get asked this a lot: will you build a course? I've been asked this probably 50 times. No. Next. No plan on building a course. We'll build a community one day for smaller e-commerce owners and operators that we can't service because their revenue is too low. I think that would be a fantastic idea; it would be relatively low cost and would give everyone access to myself and other fantastic talent within BlueSense who could facilitate community calls, et cetera. But it's not something we're focusing on. We continue to focus on upskilling in-house on e-commerce technical ability and continuing to improve the agency. So no, but that is on the time horizon for maybe six to twelve months from now.

Number six: what are your thoughts on server-side tracking? Server-side tracking is an interesting one. There's a bunch of software solutions at the moment that enable better server-side tracking, and I won't go into the technical nuances of how it works.
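For a rough picture of what "server-side" means in practice, here's a minimal sketch of a server-side purchase event, assuming Meta's Conversions API events endpoint. The pixel ID, access token, and all values are hypothetical placeholders, and the customer_type property is just an illustration of the new-versus-returning tagging discussed below; treat this as a sketch, not production code.

```python
# Rough sketch of a server-side Purchase event, assuming Meta's Conversions API
# events endpoint. All identifiers and values below are hypothetical placeholders.
import hashlib
import json
import time
import requests

PIXEL_ID = "YOUR_PIXEL_ID"          # hypothetical
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # hypothetical
API_VERSION = "v19.0"               # check Meta's docs for the current version

def sha256(value: str) -> str:
    """Customer identifiers are normalised and SHA-256 hashed before sending."""
    return hashlib.sha256(value.strip().lower().encode()).hexdigest()

event = {
    "event_name": "Purchase",
    "event_time": int(time.time()),
    "action_source": "website",
    "event_id": "order-10293",   # lets Meta deduplicate against the browser pixel
    "user_data": {
        "em": [sha256("customer@example.com")],
        "client_ip_address": "203.0.113.7",
        "client_user_agent": "example-user-agent",
    },
    "custom_data": {
        "currency": "AUD",
        "value": 129.95,
        # Custom property distinguishing new vs returning buyers, so you can
        # later see in-platform where each cohort is being attributed.
        "customer_type": "returning",
    },
}

resp = requests.post(
    f"https://graph.facebook.com/{API_VERSION}/{PIXEL_ID}/events",
    data={"data": json.dumps([event]), "access_token": ACCESS_TOKEN},
)
print(resp.status_code, resp.text)
```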
If you want to know how server-side tracking works, literally just Google "how does server-side tracking work, Meta ads" or something like that and you'll get some videos on it. Does it work? Yes, it should theoretically work. How trackable is it? Very untrackable. It's funny, right? The whole pitch of server-side tracking is "your tracking isn't good, so our app will improve your tracking", yet I can't even track the delta in improvement after we implement it. So what is going on here? The reason I say that is that the incremental value of having server-side tracking through a third-party tool is that you can measure the improvement in efficiency and performance at an account or business level and then identify whether the investment was worth it. I have seen no real incremental difference after implementing server-side tracking on a bunch of brands. Maybe that's because the effect is very small; maybe it makes a 1% or 2% difference.

And I do agree that the quality of the data you feed into Facebook and Google is critically important. Critical. A lot of my audits are focused on this. If you've ever had an audit of your backend Google and Facebook ad accounts from me, I spend about ten minutes talking about this: campaigns use historical conversion data to build models based on all the psychographic data points of the individual users who were tracked to convert, and then use that as a predictor for bid adjustments moving forward in time on Google, or for how aggressively to bid into the auction on Facebook. That is fundamentally how these machine learning models work at an individual campaign level, and a lot of the recommendations and structures we use internally are based around that concept. So the fact that server-side tracking could enable more conversions to be tracked means more accurate modelling, because you'd have greater statistical relevancy in the data set. In addition, you can filter out bad conversions, so only good conversions push through, and you can segment new customers versus returning. Fantastic, right? You could be feeding in just the best quality data. I still haven't seen it provide any real incremental value, but once again, maybe it's 1 to 2%.

There is a cool feature, though, which is that you can fire the pixel depending on new versus returning customer type. So you can get a bit of an idea in-platform of where new customers are getting attributed and where returning customers are getting attributed, and you can start to identify and better analyse things like: Advantage Plus is actually heavy on returning customers; okay, if we want to improve prospecting, we should probably do X. So it does facilitate better decision making, given that it provides insights at that level. But server-side tracking just for better tracking? I don't know. Do it, try it, see if you get results, and reach out to me if you do, but we haven't seen anything huge, and where there have been improvements, they've been single percentages.

The last two questions I'll tackle here came from the same person: what are your thoughts on small creative tests, and, off the back of that, how much creative testing should I be doing? Let's start with the second one. I actually made a sheet to be able to visualise this.
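As a rough stand-in for that sheet, here's a minimal sketch of the arithmetic behind it: budget per test is blended CAC times the number of purchases you want to see before calling a result meaningful, and the testing budget divided by that gives tests per month. The inputs below (30% allocation, $50 CAC, 15 purchases per test) are the example figures from this discussion; the sheet itself varies the allocation and relevancy factor as spend scales, which is why its quoted outputs differ from this fixed-input example.

```python
# Minimal sketch of the creative-test volume arithmetic described below.
# Inputs are the example figures from the episode; tune them to your own numbers.

def creative_tests_per_month(monthly_budget, test_allocation, blended_cac,
                             stat_relevancy_factor):
    """How many distinct creative tests a testing budget can properly fund.

    budget_per_test = blended CAC x purchases required before a test is
    considered statistically meaningful (e.g. $50 x 15 = $750)."""
    budget_per_test = blended_cac * stat_relevancy_factor
    testing_budget = monthly_budget * test_allocation
    return testing_budget / budget_per_test

# Note: the sheet in the episode varies the allocation and relevancy factor with
# spend (e.g. ~9 tests/month at $7.5K, ~51 at $40K), so its outputs differ from
# this fixed-input illustration.
for monthly_budget in (7_500, 40_000, 100_000):
    tests = creative_tests_per_month(monthly_budget, test_allocation=0.30,
                                     blended_cac=50, stat_relevancy_factor=15)
    print(f"${monthly_budget:>7,}/month -> {tests:.1f} tests/month "
          f"(~{tests / 4.33:.1f} per week)")
```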
I know that any time I pull up a sheet, I get about seven people on LinkedIn DMing me saying, please give me that sheet right now. Short answer: no. It's an internal resource and I won't be giving it out, but I'll show you what it does anyway; you can build it in two seconds. It's a really basic concept, just simple formulas.

The concept is: how much budget do you want to allocate towards testing, which in turn determines what percentage of your Meta budget goes towards scaling. This is completely dependent on whether you have assets that are working or not. Assume you do have assets that are working, you're spending on them, and you want to put 30% of budget into testing. Next, what is your blended CAC? Let's say it's $50. Then, what's your statistical relevancy factor? What I mean by this is: assuming you're not using bid caps only across Facebook, i.e. you're not in the bid cap club where you just let bid caps determine whether creatives work or not, and you want to actually achieve statistical relevancy on a creative test, you need to assign a statistical relevancy factor. So what multiple of a $50 blended CAC would you consider statistically relevant? If we maintain a sub-$50 CAC across 15 purchases, you might consider that statistically relevant, so we want to put $750 into each individual creative test. You then set your starting spend and your increments, and you get the outputs: creative tests per month and creative tests per week.

This is really eye-opening for a lot of brands, which is why I built it in the first place. If you're spending $7.5K per month and you don't really have anything that's working, so you have a higher allocation towards testing and the statistical relevancy factor really should be around 10, you need to be testing about nine creatives a month. A $7.5K budget, nine creatives a month; that's roughly two creatives per week. If you want to scale up to $40K a month in budget, you need to test 51 creatives per month. Crazy, right? The creative volume required to sustain scale on these platforms is huge, particularly in Australia, where it's a smaller geo and you're going to re-serve to the same users over and over again. Now, once you've reached that kind of spend, your allocation towards testing will probably drop as a consequence, so let's say it does; at those levels you need to be testing about 20 to 30 creatives per month. Way higher than most people think. Obviously you can play with the statistical relevancy factor and decrease it to be more aggressive if you create content cheaply and don't want to fully test each piece. There are a lot of nuances here, which is why no one can give you a direct answer. Everyone wants to know: how much creative should I test? I've tried to answer it here, and this should hopefully give you a bit of an idea, but it's going to be very dependent on a few different factors.

The follow-up question was: what are your thoughts on small creative tests? What this person meant was, because I'd shown them the kind of volume they needed to be doing, good luck, they then asked: so what's deemed as a creative test in that calculator? Can I split test some different colours of backgrounds? Can I just change some hooks?
No. Those 20 or 30 are fresh new creatives: new scripting, a completely new asset, nothing to do with an existing one. And they went, that's crazy; do you do small creative tests? My answer was no, not really, we don't really do them, and the reason why is twofold. I think you'll find this interesting.

Number one: if you're spending under $100,000 per month, is split testing the colour of a background really going to improve the performance of the brand? You can go into consumer psychology and say, okay, red backgrounds generally outperform blue, so we might want to go red, and it also fits better with the continuity of the website, so people might convert better. Cool, but is that really going to move the needle, or should we just make a better creative with a new idea? In my opinion: new creative, new idea. I don't think split testing colours has much incremental value.

The second part is that any time you run a creative test, and this is a really important concept, you're serving to a completely random audience with hundreds of variables changing every single second, and no variable is ever constant. You can see this for yourself: take one of your creatives, launch it in an ad set, then launch the exact same creative in another ad set and let them run side by side. What you'll always see is that one performs drastically differently from the other. How? It's the same creative serving to the same audience. Why are we getting such different results? Because there are thousands of variables in real life that are changing. Number one, you're not actually serving to the same people; they're completely different audiences. Number two, what happened yesterday in the macro environment probably changed today. Number three, every individual user you serve to is going through their own customer journeys simultaneously at all times. So the users one ad set served to might have been going through customer journeys that made them more inclined to buy from you; maybe they were looking for pots and pans, and you sell pots and pans, so you captured them. The other ad set served a bunch of people, some of whom were in market, but some of whom weren't currently in market for pots and pans at all; they were looking for glasses and Tupperware, and you don't sell that. There are literally millions of variables.

So when you try to take a scientific approach to creative testing at such a small variable level, you cannot get statistical relevancy, because the foundation of scientific testing is repeatability. If you run a test in science, the whole point is that anyone else should be able to repeat that test and get the same outcome; if that's true, it eventually trickles down into law, which is where you get laws within science. But if you can't repeat a test, it's not relevant, it doesn't apply, it's not a fundamental truth. So when it comes to creative testing, you have to be very, very careful about what level you're testing at and whether it's repeatable in the future. If you're split testing images versus videos, what are the chances that the image you tested was just terrible in comparison to the video, and that images would have worked, you just tested a bad image? Probably pretty high, right?
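To put a rough number on that repeatability problem, here's a minimal simulation sketch: two identical ads with the same true conversion rate, each given about $750 of spend, still come out looking meaningfully different a large share of the time. The conversion rate, CPC, and the 30% threshold are hypothetical choices, not figures from the episode.

```python
# Minimal simulation of the repeatability point: two identical ads, same true
# conversion rate, run "side by side" at small scale, still produce very
# different observed CACs much of the time. All rates and spend are hypothetical.
import random

random.seed(7)

TRUE_CVR = 0.02        # same true conversion rate for both ad sets
CPC = 1.00             # hypothetical cost per click
SPEND_PER_AD = 750     # i.e. ~750 clicks each, the "$750 per test" scale

def observed_cac(spend, cpc, cvr):
    """Simulate one ad's observed CAC from its clicks and random conversions."""
    clicks = int(spend / cpc)
    conversions = sum(1 for _ in range(clicks) if random.random() < cvr)
    return spend / conversions if conversions else float("inf")

trials = 10_000
big_gaps = 0
for _ in range(trials):
    cac_a = observed_cac(SPEND_PER_AD, CPC, TRUE_CVR)
    cac_b = observed_cac(SPEND_PER_AD, CPC, TRUE_CVR)
    # Count how often the two "identical" ads differ by more than 30% on CAC.
    if abs(cac_a - cac_b) / min(cac_a, cac_b) > 0.30:
        big_gaps += 1

print(f"Identical ads differed by >30% on CAC in {big_gaps / trials:.0%} of trials")
```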
So if you want to make a really large assumption, that videos work better for this brand than images, you'd better test a ton of videos and a ton of images over a very long time horizon before drawing that kind of conclusion and using it moving forward. If you're going to run a colour test, red background versus blue background, which performs better, it is quite frankly ridiculous to draw a conclusion from that and apply it to future creative iterations, because, once again, you can take the same creative, run it side by side against itself, and get drastically different results. So here's what I'd recommend: if you're running different colour backgrounds or some other small creative test at the moment, take those creatives, run the test, then wait a week and run the test again, then wait another week and run it again. Every single time you'll get a different result. So why are we drawing conclusions from the first one if we know it will never carry into the future? When running small creative tests, you need to be very careful, because you're likely not getting statistical relevancy and you shouldn't be using that data moving forward in time.

Is that to say you shouldn't test slightly different hooks on videos? No, I think testing hooks is a great idea, and I think a lot of videos that stop working can be fixed. If you have a really strong video creative asset, you can normally just mix and match the hook at the start for years and continue to get that creative to work. I could give you hundreds of ad account examples of that, accounts currently spending $200,000, $300,000, $400,000 a month, running one creative asset they made a year ago, mixing and matching hooks over and over again and running them into the original asset. It works incredibly well. With that being said, if you run a bunch of different hooks on an ad within an ad set, be careful about concluding that the result will sustain into the future, because the reality is that you could run that test now, run the same test a month from now, and get drastically different results. So should you be drawing that conclusion, or should you just keep running that ad set, turn the losers off, and not use it as a framework for future creative testing?

Hope this was helpful. We touched on LTV and how merchandising matters more than marketing efforts. We talked about Advantage Plus: how it's bad, but you can make it work. How BlueSense Digital grew so fast and how that ties into e-commerce, which is that the focus in a service-based business is technical competence; translated to e-com, having product competence, having an incredible product, will always yield better repeat rates, better LTV, better acquisition, better word of mouth. What are your thoughts on TikTok? Don't run it unless you're spending a lot. Will you build a course? Nope. What are your thoughts on server-side tracking? I haven't seen incremental improvements, but there are some cool features in there, like firing the pixel by new versus returning customer. What are your thoughts on small creative tests? You just heard my thoughts. And how much creative testing volume should I be doing? Ideally as much as possible, or at least a lot more than you believe you need. With that being said, I hope this was a helpful podcast and I'll see you in next week's episode.