Most Amazon Ads failures trace back to the same handful of preventable mistakes. This guide identifies the 14 most costly errors KDP authors make, explains why each one hurts performance, and shows you exactly how to fix or avoid them.
| 11-minute read | Intermediate |
Amazon Ads can compound returns for KDP authors — but only when set up and managed correctly. The technology works; the mistakes are almost always structural, strategic, or rooted in misunderstanding how attribution and bidding actually function. After analysing patterns across hundreds of KDP advertising accounts, the same errors appear repeatedly: bids that starve campaigns before they can gather data, reporting that is never checked, keywords that waste budget on irrelevant searches. This guide names each mistake clearly, explains the mechanism behind the damage, and tells you how to correct it.
Mistake 1: Launching Ads on an Unoptimised Listing
Paid traffic amplifies whatever is already on your listing — good or bad. If your cover does not immediately signal genre, your title is vague, your blurb fails to create desire, or your reviews are sparse, ads will generate clicks that do not convert. Every click that does not convert costs money and worsens your ACoS without the upside of a sale. Many authors diagnose this as an ads problem when it is really a product page problem.
Before spending a pound or dollar on advertising, audit your listing through the lens of a first-time visitor: does the cover look professional and genre-appropriate? Does the title or subtitle communicate what the book is and who it is for? Does the blurb open with a compelling hook rather than backstory or author biography? Are your Look Inside pages clean, properly formatted, and engaging? Is the price appropriate for the category? At minimum, you need a professional cover and a well-crafted blurb before ads are worth running. If your manuscript underwent professional editing and proofreading, your interior quality will also hold up under scrutiny — the Vappingo manuscript proofreading service is designed specifically for authors preparing to publish.
The practical test: check your organic conversion rate before launching ads. Go to your book’s product page and review your reviews — are there at least a handful of genuine positive reviews, or is the review count zero? A book with no reviews typically converts at a much lower rate than a book with 10–20. Ad spend on a zero-review book is often better delayed until you have seeded initial reviews through ARC readers or launch teams. Once your listing is optimised and you have social proof, ad spend converts rather than evaporates.
Mistake 2: Setting Bids Too Low From the Start
Underbidding is one of the most common beginner errors, and it creates a particularly frustrating outcome: campaigns that gather almost no data, appear to “not work”, and get paused or abandoned. Authors set bids at £0.10 or $0.10 because they want to minimise risk, but these bids are often far below the competitive threshold needed to win any auction in their category. The result is near-zero impressions, near-zero clicks, and no information about whether the campaign could actually be profitable.
Amazon provides suggested bid ranges when you add a keyword or target. These ranges are based on actual recent auction activity for that keyword. If the suggested range is £0.30–£0.70, a bid of £0.10 will rarely win impressions except on the most obscure long-tail searches. For a new campaign to gather meaningful data within a reasonable timeframe, start bids at or slightly above the midpoint of the suggested range — not at the floor. You can always lower bids once you have data showing which keywords are converting efficiently.
The underlying principle is that Amazon Ads data is expensive: every data point costs a click. Starting with competitive bids means you spend more in the early learning phase, but you gather useful information faster. Starting with bids that generate no impressions means you spend nothing but learn nothing either. Two weeks in, you are exactly where you started. Budget a genuine test amount — typically £30–£50 or $40–$60 per campaign for the first 30 days — and bid at a level that actually generates impressions. Use KDP Rank Fuel’s Amazon Ads Generator to research competitive bid levels before launch.
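The arithmetic behind this principle can be sketched in a few lines. The figures below (a £40 test budget, a £0.50 midpoint bid, 30 keywords at £5/day) are illustrative assumptions, not recommendations:

```python
# Sketch of the "data is expensive" arithmetic: how many clicks a test
# budget buys at a given average CPC, and how long a daily budget takes
# to reach a decision threshold of clicks per keyword.

def clicks_from_budget(budget: float, avg_cpc: float) -> int:
    """Approximate number of clicks a budget buys at an average cost-per-click."""
    return int(budget // avg_cpc)

def days_to_threshold(daily_budget: float, avg_cpc: float,
                      n_keywords: int, threshold: int = 20) -> float:
    """Days until each keyword averages `threshold` clicks,
    assuming the budget spreads evenly across keywords."""
    clicks_per_day = daily_budget / avg_cpc
    return threshold * n_keywords / clicks_per_day

print(clicks_from_budget(40, 0.50))    # a £40 test at a £0.50 CPC buys ~80 clicks
print(days_to_threshold(5, 0.50, 30))  # 30 keywords at £5/day: 60.0 days to 20 clicks each
```

The second number is the one underbidders never see: at £0.10 bids that win no auctions, the effective CPC is irrelevant because clicks per day is close to zero, and the days-to-threshold figure becomes effectively infinite.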
Mistake 3: Skipping Automatic Campaigns Entirely
Some authors skip automatic campaigns because they want full control over targeting and assume automatic targeting is unsophisticated. This is a mistake. Automatic campaigns are your discovery engine — Amazon’s algorithm surfaces your ad against queries and product pages that its data suggests are relevant, including terms you would never have thought to target yourself. Many of the most valuable keywords in a mature manual campaign were first discovered through automatic campaign data.
Automatic campaigns have four targeting sub-types: Close Match (queries closely related to your book’s keywords and metadata), Loose Match (more broadly related queries), Substitutes (readers viewing similar books), and Complements (readers viewing complementary products in your genre). This breadth means automatic campaigns surface both tight matches and unexpected opportunities simultaneously. Running them continuously at a modest budget — even £5–£10/day — provides an ongoing stream of real search query data that feeds your manual campaign refinement.
The correct use of automatic campaigns is as a data generator, not necessarily a primary revenue driver. Run them continuously. Check the Search Terms Report weekly. Promote converting search terms to exact match in a separate manual campaign. Add irrelevant terms as negatives back in the auto campaign. This harvest-and-scale loop is the foundation of a growing, improving Amazon Ads account. Authors who run only manual campaigns forgo the discovery phase entirely and typically exhaust their initial keyword list within months.
Mistake 4: Never Running the Search Terms Report
Setting up campaigns and then leaving them untouched is possibly the single most expensive mistake in Amazon Ads. Without regularly reviewing the Search Terms Report, you have no visibility into what actual search queries are triggering your ads. You may be spending significant money on impressions for searches that are completely irrelevant to your book — searches that will never convert — while missing the opportunity to double down on the queries that do convert.
The Search Terms Report is available in the Reports section of the Amazon Ads console: Reports → Create Report → Sponsored Products → Search Term. Run it weekly for any active campaign. Download the CSV, sort by spend descending, and review every term with 3+ clicks. Categorise each term: converters (generated a sale at acceptable ACoS) go into your manual exact match campaign; clear irrelevancies (wrong genre, wrong product type, unrelated concepts) become negative keywords; undecided terms (clicks but no conversion yet) get another week before a decision.
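The weekly triage described above can be expressed as a simple categorisation rule. This is a hedged sketch, not a finished tool: the column names (`search_term`, `clicks`, `spend`, `sales`) are assumptions, so match them to the actual headers in your downloaded CSV (for example via `csv.DictReader`):

```python
# Weekly Search Terms Report triage: converters, negative candidates,
# and terms that need another week of data. Thresholds mirror the text.

def triage(rows, breakeven_acos=0.70, min_clicks=3):
    """Sort search-term rows into (converters, negatives, undecided)."""
    converters, negatives, undecided = [], [], []
    for row in rows:
        clicks = int(row["clicks"])
        sales = float(row["sales"])
        spend = float(row["spend"])
        if clicks < min_clicks:
            continue  # not enough data yet; skip this week
        if sales > 0 and spend / sales <= breakeven_acos:
            converters.append(row["search_term"])  # promote to exact match
        elif sales == 0:
            negatives.append(row["search_term"])   # candidate negative keyword
        else:
            undecided.append(row["search_term"])   # converting but inefficient
    return converters, negatives, undecided

rows = [
    {"search_term": "cozy mystery", "clicks": "8", "spend": "3.20", "sales": "9.99"},
    {"search_term": "crochet patterns", "clicks": "5", "spend": "2.00", "sales": "0"},
]
print(triage(rows))  # (['cozy mystery'], ['crochet patterns'], [])
```

Note that the zero-sale bucket only produces *candidate* negatives: apply human judgement before negating, since a relevant term may simply not have converted yet.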
Authors who run this report weekly and act on it consistently see ACoS improvements of 20–40% within 60–90 days as wasted spend is progressively eliminated and budget concentrates on proven converters. Authors who never run it stay flat or see gradual deterioration as poorly performing terms accumulate spend unchecked. Fifteen minutes per week on this report is the highest-leverage action in Amazon Ads management.
Mistake 5: Adding Too Many Keywords at Once
More keywords do not mean more sales — they mean more diffuse data. When a campaign contains 200 keywords at launch, each keyword receives a tiny fraction of the total budget. At £5/day, 200 keywords averages £0.025 per keyword per day — nowhere near enough to gather statistically meaningful click data within a reasonable timeframe. After 30 days you have scattered, thin data across all 200 keywords and still cannot tell which ones are genuinely effective.
A more effective approach: launch manual campaigns with 20–40 tightly themed keywords. Group related keywords in the same ad group so Amazon can identify topical relevance. Set bids at competitive levels and let the campaign run for 3–4 weeks before analysing. With fewer keywords and the same budget, each keyword accumulates more meaningful click data faster. You can then make informed bid adjustments — raising bids on what works, pausing what does not — rather than trying to read noise across hundreds of thin-data keywords.
If you genuinely have identified 150+ relevant keywords through your research, organise them into multiple campaigns by theme or intent rather than loading them all into one. A campaign for broad-match author name targeting behaves very differently from one targeting specific genre keywords. Separating them gives you cleaner data and cleaner bidding control. The goal is not coverage — it is insight that leads to deliberate action.
Mistake 6: Using Only Broad Match Keywords
Broad match casts the widest net: Amazon can show your ad for any query it considers loosely related to your keyword. Broad match is useful for discovery — finding variations you did not anticipate — but it is the most expensive match type because it generates the most irrelevant impressions and clicks. Authors who run every keyword on broad match burn through budget on tangential searches and never convert the investment into organised, efficient targeting.
A mature campaign architecture balances all three match types. Broad match runs in an automatic or dedicated discovery campaign. Phrase match covers proven themes with some flexibility. Exact match carries the specific converting queries you have validated through the Search Terms Report, at higher bids where you want maximum impressions. Using only broad match means you are permanently in discovery mode with no harvest — finding new terms but never crystallising your wins into efficient exact match campaigns.
The practical correction: after 4–6 weeks of automatic campaign data, build a manual exact match campaign containing the top 10–20 converting search terms you have found. Bid aggressively on these terms — you have evidence they convert — while leaving broad and phrase match running at lower bids for continued discovery. This architecture is more complex than a single broad-match campaign, but it dramatically improves overall account efficiency.
Mistake 7: Ignoring Negative Keywords
Negative keywords prevent your ad from appearing for irrelevant searches. Without them, Amazon’s broad and automatic targeting will inevitably surface your ad for queries that share vocabulary with your book but have entirely different intent. A thriller writer targeting “serial killer fiction” may find their ad appearing for true crime documentaries, news content, or academic psychology searches. These are unwinnable — they will never convert — but they consume budget on every click.
Build negative keyword lists proactively, not just reactively. Before launching, add obvious negative terms: competitor author names you do not want to target, genre adjacencies that are not quite your genre, format-specific terms that do not match your offer (if you are selling a Kindle ebook, consider whether “audiobook” or “hardcover” queries convert). Then add negatives reactively from your weekly Search Terms Report review — any term with multiple clicks and zero sales that is clearly irrelevant should be added as a negative immediately.
Apply negative keywords at the campaign level for broad-reach irrelevancies, and at the ad group level for more nuanced filtering. Negative exact match prevents your ad from appearing for that precise query. Negative phrase match prevents it from appearing for any query containing that phrase. Building a negative keyword library across your account — reusing learned negatives as you launch new campaigns — means each new campaign starts with fewer waste terms from day one.
Mistake 8: Judging Campaigns Too Early
Amazon Ads have a learning period. When a campaign is new, Amazon’s algorithm is still calibrating which placements and audiences generate the best results for your targeting. The first 7–14 days of any campaign often show higher ACoS and lower conversion than the campaign’s eventual steady state. Authors who check performance after three days, see an unflattering ACoS, and pause or restructure are making changes before meaningful data exists.
A reliable minimum data threshold is 15–20 clicks per keyword before making keyword-level decisions, and 30 days of run time before making campaign-level strategic decisions. A keyword with 5 clicks and no sale has not yet generated enough data to conclude it will not convert — statistically, a keyword with a 1-in-10 conversion rate could easily show zero sales in 5 clicks by chance. At 20 clicks with zero sales, the picture is much clearer.
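The statistics here are worth making concrete. Assuming each click converts independently at some underlying rate (a simplification, but a useful one), the chance that a genuinely converting keyword shows zero sales by luck alone is:

```python
# Probability that a keyword with true conversion rate p shows zero sales
# after n clicks, assuming independent clicks (a simplifying assumption).

def p_zero_sales(clicks: int, conversion_rate: float) -> float:
    return (1 - conversion_rate) ** clicks

print(round(p_zero_sales(5, 0.10), 2))   # 0.59: zero sales in 5 clicks is likely noise
print(round(p_zero_sales(20, 0.10), 2))  # 0.12: zero sales in 20 clicks is real evidence
```

A 59% chance of a false negative at 5 clicks is why pausing that early throws away keywords that were never given a fair test.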
This patience principle extends to ACoS interpretation. If a campaign shows 200% ACoS in week one, that may simply reflect the data-gathering phase — impressions are being bought, initial clicks are coming in, but conversions follow a distribution that takes time to appear. Compare week-4 ACoS to week-1 ACoS before concluding the campaign structure is wrong. Many campaigns that look like failures in week one are breaking even or profitable by week six as the algorithm learns.
Mistake 9: Making Too Many Changes at Once
When a campaign is underperforming, the temptation is to adjust everything: bids, budgets, match types, keywords, negative terms, and ad copy, all in the same session. This creates an attribution problem — if performance improves (or worsens) in the following week, you cannot identify which change was responsible. Systematic improvement requires systematic testing, which means changing one variable at a time and waiting for the data to respond before making the next change.
Prioritise your changes. In a weekly optimisation session, the correct order is: first, add new negatives from the Search Terms Report (waste elimination, clearly beneficial); second, adjust bids on keywords with sufficient data; third, pause genuinely underperforming keywords with 20+ clicks and zero conversions; fourth, promote new converters to exact match. Do not simultaneously restructure campaign architecture, change match types wholesale, and add 50 new keywords in the same session. Let each wave of changes settle for 7–10 days before the next batch.
Keep a simple change log: date, what changed, outcome. After a few months of disciplined logging you will have a personal evidence base about what changes tend to improve your specific account. This is more valuable than any generic Amazon Ads advice because it reflects your actual books, category, and audience.
Mistake 10: Not Understanding ACoS Targets
ACoS (Advertising Cost of Sale) is ad spend divided by ad revenue multiplied by 100. A 40% ACoS means you spent £40 to generate £100 in ad-attributed sales. Whether 40% is good or bad depends entirely on your breakeven ACoS, which is your royalty divided by your list price. If your book earns £2.10 in royalties on a £2.99 price, your breakeven ACoS is 70% — meaning any ACoS below 70% is profitable. Many authors target far lower ACoS than necessary and as a result under-bid, generating insufficient impressions to grow.
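The breakeven calculation from the worked example above, in code:

```python
# Breakeven ACoS: royalty divided by list price, expressed as a percentage.
# Any campaign ACoS below this figure is profitable on the sale itself.

def breakeven_acos(royalty: float, price: float) -> float:
    return royalty / price * 100

print(round(breakeven_acos(2.10, 2.99), 1))  # 70.2 (% — matches the £2.99 example)
```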
The mistake compounds for authors in Kindle Unlimited. Their income includes KENP (Kindle Edition Normalised Pages) reads, which are not captured in standard ACoS reporting. An author in KU might show 100% ACoS in the console (ad spend equals apparent sales revenue) while actually being profitable once the page reads driven by those ads are counted. Without accounting for KU income, ACoS targets are miscalibrated — authors in KU should typically accept a higher apparent ACoS because a meaningful portion of their income is invisible to the metric.
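A rough KU-adjusted ACoS can be estimated by adding KENP income back into the denominator. Two loud caveats: the per-page rate below (~£0.0035) is an illustrative assumption, since the actual rate varies monthly, and attributing page reads to ads is itself an estimate rather than something the console reports:

```python
# KU-adjusted ("true") ACoS sketch. kenp_rate is an assumed per-page
# payout, not a published figure; substitute the current month's rate.

def true_acos(ad_spend: float, sales_revenue: float,
              kenp_pages: int, kenp_rate: float = 0.0035) -> float:
    total_revenue = sales_revenue + kenp_pages * kenp_rate
    return ad_spend / total_revenue * 100

# Console shows 100% ACoS (£50 spend, £50 sales), but an estimated
# 20,000 ad-driven page reads add ~£70 of income the metric cannot see:
print(round(true_acos(50, 50, 20_000), 1))  # 41.7
```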
Calculate your breakeven ACoS before setting any campaign targets. Then decide your strategy: if you want to grow visibility and are willing to invest in rank, you might run at or near breakeven ACoS for a launch period. If you want profitability from day one, target ACoS comfortably below breakeven. Neither is wrong — they serve different goals. What is wrong is applying an arbitrary ACoS target (like “under 30%”) without knowing what your specific economics require.
Mistake 11: Pausing Everything When ACoS Spikes
ACoS spikes are normal and do not always signal a structural problem. Seasonality, competitor activity, Amazon algorithm changes, and random variance in conversion rates can all temporarily elevate ACoS. Authors who pause all campaigns the moment ACoS rises above their target often interrupt campaigns that were delivering long-term value — and then restart from scratch, losing any algorithmic learning the campaign had accumulated.
Before pausing a campaign after an ACoS spike, ask: over what time period? A spike over 7 days in a campaign that has been efficient for 3 months is very different from 3 months of consistently poor ACoS. Is the spike isolated to specific keywords, or is it account-wide? Account-wide spikes often have an external cause (competitor promotion, category saturation, seasonal effect). Keyword-level spikes are addressable through bid reduction or pausing the specific keyword rather than the whole campaign.
The general principle: campaigns with established track records of efficiency deserve more benefit of the doubt during short-term performance dips. Use a 30-day rolling average rather than week-to-week snapshots for campaign-level decisions. Reserve pausing for campaigns that have been given a genuine data-gathering period (30+ days, 50+ clicks) and still show consistently poor ACoS with no sign of improvement. In most other cases, adjust bids rather than pause.
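A 30-day rolling average is straightforward to compute from a daily export. This is a minimal sketch assuming you have daily `(spend, ad_sales)` pairs pulled from the console:

```python
# Rolling ACoS (%) over a trailing window of daily (spend, ad_sales) pairs.
# Returns None for days where the window had no attributed sales.

def rolling_acos(daily, window=30):
    out = []
    for i in range(window - 1, len(daily)):
        spend = sum(d[0] for d in daily[i - window + 1 : i + 1])
        sales = sum(d[1] for d in daily[i - window + 1 : i + 1])
        out.append(spend / sales * 100 if sales else None)
    return out

# Tiny demonstration with a 2-day window:
print(rolling_acos([(1, 2), (1, 2), (2, 2)], window=2))  # [50.0, 75.0]
```

Judging the trend of this smoothed series, rather than any single week's snapshot, is what keeps a one-off spike from triggering an unnecessary pause.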
Mistake 12: Ignoring Organic Rank Gains
Amazon’s algorithm weighs sales velocity when determining organic ranking. Every sale your ads generate — even at break-even or slight loss — contributes to your organic visibility. Authors who evaluate ads purely on immediate ACoS miss this compounding benefit: improved organic rank means more sales that cost nothing per click. The TACoS metric (Total Advertising Cost of Sale) — ad spend divided by total revenue including organic — is the correct way to measure true ad return, precisely because it captures organic uplift.
A practical example: if you spend £200/month on ads, your ad-attributed sales are £300 (a 67% ACoS that looks poor on its own), but your total monthly revenue including organic is £900 — your TACoS is 22%. The ads drove rank, which drove organic discovery, which generated £600 in non-ad sales. Evaluated on ACoS alone, the campaign looks marginal at best. Evaluated on TACoS, it is an outstanding investment. Authors who do not track organic sales alongside ad-attributed sales will mismanage their accounts and under-invest in ads that are actually working.
Monitor TACoS monthly using your KDP dashboard total sales data alongside your Amazon Ads spend data. Calculate it manually: total ad spend ÷ total book revenue (including organic) × 100. A benchmark TACoS for a healthy, growing account typically falls between 8% and 20% — significantly lower than ACoS because organic sales form an increasingly large portion of total revenue as rank improves over time.
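The manual calculation, using the figures from the example above:

```python
# TACoS: ad spend divided by total revenue (ad-attributed plus organic).

def tacos(ad_spend: float, total_revenue: float) -> float:
    return ad_spend / total_revenue * 100

print(round(tacos(200, 900), 1))  # 22.2 (% — the worked example above)
```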
Mistake 13: Running Identical Campaigns Across Formats
Kindle ebooks, paperbacks, and hardcovers have different price points, different royalty rates, different reader discovery behaviour, and different conversion characteristics. Running the same campaign structure with the same bids and keywords across all formats is a structural error. A keyword bid that generates profitable ACoS for a £9.99 paperback may be dramatically overpriced for a £2.99 ebook where the royalty barely covers the click cost.
Create separate campaigns per format. Calculate the breakeven ACoS for each format individually and set bids accordingly. For ebooks, bids generally need to be lower to reflect the lower per-sale royalty. For paperbacks and hardcovers with higher price points, you can afford higher CPCs and still generate profitable ACoS. Product targeting campaigns should also differ — paperback readers browsing physical books behave differently from Kindle readers browsing digital titles.
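One way to see why ebook bids must sit lower: the maximum CPC you can pay and still break even is royalty per sale multiplied by conversion rate (at breakeven, cost per sale equals royalty, and cost per sale is CPC divided by conversion rate). The royalties and the 10% conversion rate below are illustrative assumptions; substitute your own figures:

```python
# Per-format bid ceiling sketch: max breakeven CPC = royalty x conversion rate.
# Royalty figures and the 10% conversion rate are illustrative assumptions.

def max_breakeven_cpc(royalty: float, conversion_rate: float) -> float:
    return royalty * conversion_rate

formats = {"ebook": 2.10, "paperback": 3.50, "hardcover": 6.00}  # royalty per sale
for name, royalty in formats.items():
    print(f"{name}: max breakeven CPC £{max_breakeven_cpc(royalty, 0.10):.2f}")
# ebook £0.21, paperback £0.35, hardcover £0.60
```

The percentage breakeven ACoS can actually be *higher* for a cheap ebook, but the absolute royalty per sale is smaller, so the affordable click price is lower — which is why a single shared bid across formats misprices at least one of them.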
Additionally, automatic campaigns often surface different search query behaviour by format. Kindle ebook searches skew toward genre discovery queries (“best cozy mysteries 2026”), while paperback searches sometimes include gift or physical format signals (“paperback thriller novel”, “book for men who like military history”). Running format-specific campaigns lets you see this difference in the Search Terms Report and refine accordingly.
Mistake 14: Never Scaling What Works
The opposite of over-spending on poor performers is under-investing in proven winners. Many authors find a handful of keywords that consistently convert at excellent ACoS and then leave bid caps and budgets exactly where they were when they first set the campaign. If a keyword is generating sales at 25% ACoS against a 70% breakeven, leaving that keyword at a low bid is a missed opportunity — increasing the bid would win more impressions and generate more sales at still-profitable economics.
Scaling rules for proven keywords: if a keyword has been converting at ACoS significantly below your breakeven for 30+ days with meaningful volume (10+ clicks per week), test a 15–20% bid increase and monitor for 7–10 days. If ACoS remains below target, raise again. If ACoS rises toward target, hold. Repeat until bids rise to the point where incremental impression gains no longer come at acceptable ACoS — that is your efficient bid ceiling for that keyword.
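The scaling rule above can be written as a weekly decision function. Thresholds mirror the text; the 17.5% step (the middle of the 15–20% range) and all example figures are illustrative:

```python
# Weekly bid decision for a proven keyword, mirroring the scaling rule:
# raise while ACoS is below target, hold near target, pull back above breakeven.

def next_bid(current_bid, acos, target_acos, breakeven_acos,
             clicks_per_week, step=0.175):
    """Return the bid to set for the coming week (ACoS values in %)."""
    if clicks_per_week < 10:
        return current_bid                          # not enough volume to judge
    if acos < target_acos:
        return round(current_bid * (1 + step), 2)   # room to scale: raise 15-20%
    if acos < breakeven_acos:
        return current_bid                          # near target: hold
    return round(current_bid * (1 - step), 2)       # above breakeven: pull back

print(next_bid(0.40, acos=25, target_acos=50, breakeven_acos=70,
               clicks_per_week=15))  # 0.47
```

Running this rule week over week converges on the efficient bid ceiling the text describes: bids rise until incremental impressions stop arriving at acceptable ACoS, then hold.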
The same logic applies at the campaign level. A campaign generating strong ROI should have its daily budget increased, not held flat. Budget caps prevent your best campaigns from spending on good opportunities. If a campaign is regularly hitting its daily cap with strong ACoS, it is self-limiting — raise the budget to let it run throughout the full day. Scaling working campaigns is the path from small, profitable advertising to advertising that genuinely moves the needle on total book sales. Use KDP Rank Fuel to track which keywords are delivering and where there is headroom to grow.