The pay-per-click ad industry is always evolving. New features roll out nonstop; managing a campaign today is likely different from a year ago. But some outdated Google Ads tactics remain useful when tweaked. Here are four of them.
Quality Score
Google defines Quality Score as “a diagnostic tool to give you a sense of how well your ad quality compares to other advertisers.”
The score rates every keyword on a scale of 1 to 10. A higher number indicates consistency throughout the search experience. For example, if a user searches for “oval coffee tables,” the ad and the subsequent landing page should speak to the same terms. Keywords with higher Quality Scores generally have lower click costs over time.
A problem with Quality Score, however, is that it emphasizes click-through rate more than conversions. A keyword could have a poor Quality Score but excellent conversions. Tweaking that keyword could improve the Quality Score and reduce conversions. Still, common ways to improve Quality Score include:
- Adding negative keywords,
- Inserting the target keyword(s) more frequently in the ads,
- Updating the landing page to sync with the ad’s message.
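Before tweaking anything, it helps to know which keywords exhibit the trade-off above: a low Quality Score but strong conversions. Here is a minimal sketch using the google-ads Python client, assuming a configured google-ads.yaml; the account ID and the thresholds are placeholders for illustration:

```python
from google.ads.googleads.client import GoogleAdsClient

# Assumes credentials in google-ads.yaml; the customer ID is a placeholder.
client = GoogleAdsClient.load_from_storage("google-ads.yaml")
ga_service = client.get_service("GoogleAdsService")

query = """
    SELECT
      ad_group_criterion.keyword.text,
      ad_group_criterion.quality_info.quality_score,
      metrics.conversions,
      metrics.ctr
    FROM keyword_view
    WHERE segments.date DURING LAST_30_DAYS
      AND ad_group_criterion.status = 'ENABLED'
"""

for batch in ga_service.search_stream(customer_id="1234567890", query=query):
    for row in batch.results:
        qs = row.ad_group_criterion.quality_info.quality_score
        conversions = row.metrics.conversions
        # Illustrative thresholds: poor score, healthy conversion volume.
        if qs <= 4 and conversions >= 10:
            print(f"{row.ad_group_criterion.keyword.text}: "
                  f"QS {qs}, {conversions:.0f} conversions")
```

Keywords the script flags convert well despite a low score; per the caveat above, they may be better left alone.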
A/B Testing
Advertisers once tested ad components by running them against each other in the same ad group. To see which call-to-action, landing page, or ad copy worked better, an advertiser would create two ads, which Google would show evenly over time.
That’s no longer the case.
Responsive Search Ads contain multiple headlines and descriptions, and Google automatically shows the best combinations in search results. Advertisers do not know which combinations are converting, only the overall metrics. Even with just two ads, one will inevitably gain a higher impression share based on the conversion goal. The lack of transparency and unequal ad serving prevents accurate testing.
The answer is Ad Variations, which tests a base component of an ad against a trial, split 50/50. To test landing pages, an advertiser instructs Google to replace that entity half the time. Advertisers cannot see metrics for each combination, but they can see whether the base or the trial performed better. In the era of automation, Ad Variations are the most effective way to test components.

[Image: Ad Variations experiments disclose the overall performance of the version that achieved the best results.]
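Deciding whether the trial genuinely beat the base takes more than eyeballing totals. Here is a minimal sketch of a two-proportion z-test on a 50/50 split; the click and conversion counts are hypothetical, purely for illustration:

```python
from math import sqrt, erf

def split_test_p_value(conv_a: int, clicks_a: int,
                       conv_b: int, clicks_b: int) -> float:
    """Two-proportion z-test; returns the two-sided p-value."""
    p_a, p_b = conv_a / clicks_a, conv_b / clicks_b
    p_pool = (conv_a + conv_b) / (clicks_a + clicks_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / clicks_a + 1 / clicks_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical base vs. trial results from an Ad Variations run.
p = split_test_p_value(conv_a=48, clicks_a=1200, conv_b=72, clicks_b=1180)
print(f"p-value: {p:.3f}")  # below 0.05 suggests a real difference
```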
Match Type Ad Groups
Creating ad groups by match type was common before match type variants and the phase-out of modified broad match.
For example, “oval coffee table” themed keywords would have required two ad groups with the same keywords: one containing only exact match keywords, the other phrase match. Importantly, all keywords in the exact match group would be added as negatives in the phrase match group, allowing the advertiser to control which ads appear. Exact matches would show one set of ads, phrase matches the other.
Setting the campaign to manual bidding allows advertisers to control the cost (and the copy) for each variation, such as bidding $0.50 on a phrase match keyword and a different amount on the exact match.
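The cross-group negatives in the example above can be added through the Google Ads API. Here is a minimal sketch using the google-ads Python client, assuming a configured google-ads.yaml; the account ID, ad group ID, and keyword list are placeholders:

```python
from google.ads.googleads.client import GoogleAdsClient

client = GoogleAdsClient.load_from_storage("google-ads.yaml")
service = client.get_service("AdGroupCriterionService")

customer_id = "1234567890"        # placeholder account ID
phrase_ad_group_id = "111111111"  # placeholder: the phrase match ad group
exact_keywords = ["oval coffee table", "oval coffee tables"]  # placeholders

operations = []
for text in exact_keywords:
    op = client.get_type("AdGroupCriterionOperation")
    criterion = op.create
    criterion.ad_group = service.ad_group_path(customer_id, phrase_ad_group_id)
    criterion.negative = True  # block the exact terms in the phrase match group
    criterion.keyword.text = text
    criterion.keyword.match_type = client.enums.KeywordMatchTypeEnum.EXACT
    operations.append(op)

response = service.mutate_ad_group_criteria(
    customer_id=customer_id, operations=operations
)
for result in response.results:
    print("Created negative:", result.resource_name)
```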
Manual Bidding
Manual bidding allows for bid adjustments such as device and location, but smart bidding automatically adjusts for these items and more. The advanced machine learning that smart bidding provides is far superior to manual bidding. For example, smart bidding considers users’ browsers and operating systems.
However, manual bidding is still occasionally helpful. For example, bidding above a certain amount on a set of keywords could be unprofitable for an advertiser. Manual bidding would cap the max cost per click, trading the advantages of smart bidding for cost control.
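Capping the max cost per click is a one-field update on the keyword, provided the campaign already uses manual CPC bidding. Here is a minimal sketch with the google-ads Python client; the IDs and the $1.50 cap are placeholders:

```python
from google.ads.googleads.client import GoogleAdsClient
from google.api_core import protobuf_helpers

client = GoogleAdsClient.load_from_storage("google-ads.yaml")
service = client.get_service("AdGroupCriterionService")

customer_id = "1234567890"  # placeholder
ad_group_id = "111111111"   # placeholder
criterion_id = "222222222"  # placeholder: the keyword's criterion ID

op = client.get_type("AdGroupCriterionOperation")
criterion = op.update
criterion.resource_name = service.ad_group_criterion_path(
    customer_id, ad_group_id, criterion_id
)
# Bids are expressed in micros: $1.50 = 1,500,000.
criterion.cpc_bid_micros = 1_500_000

# Send only the field that changed.
client.copy_from(
    op.update_mask,
    protobuf_helpers.field_mask(None, criterion._pb),
)

response = service.mutate_ad_group_criteria(
    customer_id=customer_id, operations=[op]
)
print("Updated:", response.results[0].resource_name)
```

The update mask matters: without it, the API would treat unset fields as intentional changes rather than leaving them alone.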