In the age of automation and smart bidding, does the “best practice” of segmenting keyword match types (Exact, Phrase, and Broad / BMM) by campaign or ad group really improve performance? A search for ‘search ads campaign structure best practices’ returns a myriad of articles promoting various campaign structures, most of which lean toward segmentation.
This is a follow-up case study to our previous post, which tested this question. Our previous tests showed that aggregation improved performance when using smart bidding. However, a couple of tests are not enough to definitively answer the question.
So we expanded the test to 44 campaigns across 10 accounts in order to have a robust data set to answer the question.
Google Ads features four match types (broad, broad match modified, phrase, and exact) that advertisers can select for their keywords to help control which searches can trigger an ad. Each match type functions differently: ‘broad’ matches the widest range of search queries and ‘exact’ the narrowest. Because of this, the same keyword set to different match types will see varied performance.
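To make the difference concrete, here is a deliberately simplified sketch of the classic (pre-close-variant) matching rules. This is not Google's actual matching logic, which also handles normalization, close variants, and modified broad match; it only illustrates how each match type narrows the set of eligible queries.

```python
def matches(keyword: str, query: str, match_type: str) -> bool:
    """Toy match-type check: exact = identical query, phrase = keyword
    appears as a contiguous phrase, broad = all keyword terms present."""
    kw_words = keyword.lower().split()
    q_words = query.lower().split()
    if match_type == "exact":
        return q_words == kw_words
    if match_type == "phrase":
        n = len(kw_words)
        return any(q_words[i:i + n] == kw_words
                   for i in range(len(q_words) - n + 1))
    if match_type == "broad":
        return all(w in q_words for w in kw_words)
    raise ValueError(f"unknown match type: {match_type}")

for q in ("running shoes", "cheap running shoes", "shoes for running"):
    eligible = [mt for mt in ("exact", "phrase", "broad")
                if matches("running shoes", q, mt)]
    print(f"{q!r} -> {eligible}")
```

Under these toy rules, “cheap running shoes” triggers phrase and broad but not exact, and “shoes for running” triggers broad only, which is why the same keyword behaves so differently across match types.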
Advertisers adapted to these differences in match type functionality and performance through novel campaign structures. In general, broad match keywords are assumed to have the highest CPCs and lowest overall performance because they match the largest volume of search queries (many of them irrelevant). In contrast, exact match keywords are assumed to have the lowest CPCs and best overall performance because they trigger ads only for searches that match the exact keyword.
Advertisers designed campaign structures to control the flow of search queries to specific match types. This is done through segmenting match types into separate campaigns/ad groups, applying directional negative keywords between campaigns/ad groups, negative query mapping, single keyword ad groups, and many other structural practices.
However, over the years Google has rolled out various ‘close match variant’ updates to these match types which have changed their functionality. A “close variant” includes searches for keywords with the same meaning as the keywords, regardless of spelling or grammar differences between the query and the keyword.
Through smart bidding, bids are set in real time at the query level based on hundreds of audience signals. It is impossible for human advertisers to emulate what smart bidding is capable of, because real-time, query-level bidding is not even available to us. According to Google, “These algorithms factor in a wider range of parameters that impact performance than a single person or team could compute.” At Seer Interactive, we have seen lots of success with smart bidding, even beating out third-party bidding platforms such as Marin and Kenshoo.
And smart bidding has implications for campaign structures. According to the Google account structure playbook (this is not a public document, ask your Google Ads representative for a copy), “granular keyword segmentation is unnecessary as smart bidding is auction-time and factors various signals beyond keywords and the query entered.”
Because of this, Google recommends consolidating traffic into fewer and larger ad groups by removing avoidable traffic segmentations such as: keyword match type, device, geo, day, and audiences.
In other words, advertisers should remove unnecessary traffic segmentations in order to feed smart bidding algorithms as much data as possible.
As previously mentioned, our first two tests on this subject showed improvements in conversion volume, CPA, CTR, and CPC when match types were aggregated into the same ad groups.
For this study, we expanded the testing cohort across our agency to 44 campaigns from 10 different accounts. We chose campaigns that had match types segmented by ad groups, were already using a smart bidding strategy, and had above average impression and conversion volume. We then created experiments (traffic split 50/50) and changed the campaign structure in the experiment campaigns so that all match types were present in the same ad group (ad groups were still segmented by keyword themes).
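The comparison between the two arms comes down to a handful of derived metrics. As a minimal sketch (with made-up totals, not our actual test data), this is how CPC, CTR, CVR, and CPA are computed from the raw totals of each arm:

```python
def summarize(arm: dict) -> dict:
    """Derive the core paid-search metrics from raw campaign totals."""
    return {
        "CPC": arm["cost"] / arm["clicks"],          # cost per click
        "CTR": arm["clicks"] / arm["impressions"],   # click-through rate
        "CVR": arm["conversions"] / arm["clicks"],   # conversion rate
        "CPA": arm["cost"] / arm["conversions"],     # cost per acquisition
    }

# Illustrative numbers only -- not results from the study.
control = {"impressions": 100_000, "clicks": 4_000,
           "cost": 6_000.0, "conversions": 200}      # segmented
experiment = {"impressions": 100_000, "clicks": 4_200,
              "cost": 5_900.0, "conversions": 230}   # aggregated

for name, arm in (("segmented (control)", control),
                  ("aggregated (experiment)", experiment)):
    print(name, {k: round(v, 4) for k, v in summarize(arm).items()})
```

In a 50/50 experiment split, impressions are roughly balanced by design, so differences in these derived metrics are what the test is actually measuring.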
Our hypothesis for the test was that aggregation would outperform segmentation. The test ran for 60 days, and we were surprised by the results.
- Teal = Segmentation Control
- Orange = Aggregation Experiment
- Purple = Difference between the two
Overall, we saw the experiment campaigns outperform the control campaigns on cost, conversion volume, CPCs, CVR, and CPA. These results seemed to confirm our hypothesis that aggregation would outperform segmentation, until we took a closer look at the data.
Slicing the performance by campaign, we saw that one account accounted for over 60% of all impressions in the entire test population. We then looked at the data by account and saw a mixed bag of performance. 50% of accounts saw improvements in CVR, and only 33% of accounts tested saw improvements in CPCs and CPA.
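This is why per-account slicing matters: when one account dominates the pooled totals, the aggregate metric mostly reflects that account and can mask mixed results elsewhere. A toy example with invented numbers:

```python
# Made-up data: one large account dwarfs the others, as in our test
# where a single account drove over 60% of impressions.
accounts = {
    "A (large)": {"clicks": 50_000, "conversions": 3_000},  # CVR 6.0%
    "B":         {"clicks": 2_000,  "conversions": 60},     # CVR 3.0%
    "C":         {"clicks": 1_500,  "conversions": 30},     # CVR 2.0%
}

# Pooled CVR is pulled toward the big account's rate...
pooled_cvr = (sum(a["conversions"] for a in accounts.values())
              / sum(a["clicks"] for a in accounts.values()))

# ...while per-account CVRs tell a much more mixed story.
per_account = {name: a["conversions"] / a["clicks"]
               for name, a in accounts.items()}

print(f"pooled CVR: {pooled_cvr:.3%}")
for name, cvr in per_account.items():
    print(f"{name}: {cvr:.1%}")
```

The pooled rate lands near 5.8%, close to the large account's 6.0%, even though the other accounts sit at 2–3%. Judging the test on pooled totals alone would have hidden exactly this kind of spread.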
This was not the definitive answer we were expecting, particularly considering the wide range of accounts, bid strategies, and keywords included in the test. However, we have to trust the data over our assumptions.
One factor that was not consistent across campaigns was which smart bidding strategy each campaign was using. Slicing the data by bid strategy showed another mixed bag of performance, but we did see that aggregated match types performed worse on CTR and CPC when using eCPC or Maximize Clicks.
The aggregated ad groups match a larger set of search queries than the segmented ad groups. Because of this, eCPC and Maximize Clicks may have performed worse on CTR and CPCs because they spend more on broad match keywords relative to the other bid strategies. In contrast, the goal-based strategies (Maximize Conversion Value, Target CPA, and the like) optimize toward specific targets and are probably more selective about which match types they bid on.
So what campaign structure will result in better performance, aggregation or segmentation of keyword match types?
Based on our test results, we do not know for sure.
Performance seems to differ on an account-by-account basis. Too many factors, such as the smart bidding strategy in use, determine whether one campaign structure will succeed over the other. Overall, we encourage you to test this on your accounts and see what works best for you.
Google’s developments on match types and automation do make us wonder what is next for keywords and match types. With the advent of close match variants and automated campaign types like dynamic search ads (which don’t even require keywords), it makes us question whether match types will even be a thing five years from now.
If you are inspired to test this for yourself, let us know how it went!
Be sure to sign up for our newsletter to stay up to date on all things digital.