7 Best GEO Agencies for B2B SaaS Companies in 2026
Your best prospect isn’t Googling anymore.
They opened ChatGPT, typed “best [your category] tool for [their use case],” and got a confident, synthesized answer in 12 seconds. Three competitors were named. You weren’t one of them.
That’s not a hypothetical. That’s Tuesday morning for your ICP.
ChatGPT now processes over 1 billion queries per day and ranks as the fourth most visited website globally. Perplexity has scaled from 230 million to 780 million monthly searches in under a year. Over 40% of Google searches in 2026 trigger AI Overviews: answers that absorb the click before the user ever reaches your website. And according to Gartner, traditional search engine volume is projected to drop 25% by 2026 as AI assistants absorb more of the discovery layer.
For B2B SaaS companies, the implication is direct: if your brand isn’t being cited by large language models (LLMs) during the buyer’s research phase, you’re losing pipeline you never knew existed.
This is where Generative Engine Optimization (GEO) comes in, and why the GEO agency you choose matters enormously.
The problem is that most agencies now claim GEO expertise. Very few have built it specifically for B2B SaaS, where the buyer journey is longer, the ICP is more precise, and the difference between traffic and pipeline is a chasm most agencies never actually cross.
This list cuts through the noise. Below are the seven GEO agencies that have demonstrated real capability in AI search visibility for B2B SaaS: what they do, who they’re best for, and what actually sets them apart.
Quick List: The 7 Best GEO Agencies for B2B SaaS in 2026
If you want the shortlist first and the details later, this is the ranked list used throughout the article. Each recommendation includes a “best for” label so you can quickly match an agency to your GTM stage, internal bandwidth, and how accountable you need the program to be to pipeline outcomes.
- DerivateX: Best for B2B SaaS teams that want a full GEO program built around AI visibility baselines, citation mapping, narrative accuracy, and reporting that connects AI search visibility to measurable demand outcomes.
- iPullRank: Best for scale-up and enterprise SaaS companies that want a structured, research-led AI search strategy program with strong governance, technical depth, and clear measurement.
- Seer Interactive: Best for SaaS brands that want rigorous testing and analytics to win AI Overviews and other zero-click SERP surfaces, with an experimentation-first approach.
- First Page Sage: Best for teams that want a formal GEO framework focused on authority building, list inclusion, reputation signals, and the off-site footprint that influences AI citations.
- Rock The Rankings: Best for SaaS companies prioritizing high-intent visibility, especially comparisons, alternatives, and buyer-facing prompts where citations can influence shortlists.
- NoGood: Best for SaaS and tech brands that want answer engine optimization packaged as an end-to-end service, including audits, structured optimization, and ongoing monitoring across AI engines.
- Quoleady: Best for SaaS teams that want LLM visibility work that blends content and outreach, with a focus on earning presence in sources that commonly shape AI recommendations.
GEO in 2026: Why B2B SaaS Teams Need a New Playbook
Search is no longer only about ranking pages.
Increasingly, buyers are served an answer that is generated, summarized, and stitched together from multiple sources, with only a few citations. When that happens, your visibility depends on whether the model can retrieve your content, trust it, and quote it accurately. This is the practical definition of generative engine optimization: improving your odds of being selected and cited in AI-driven search experiences such as Google AI Overviews and other answer engines.
The reason GEO has become urgent is measurable. Forrester reports that 89 percent of B2B buyers have adopted generative AI in less than two years and use it as a self-guided information source across buying stages. At the same time, the click economy is compressing. Seer Interactive’s tracking shows that, on queries where AI Overviews appear, organic click-through rates fell sharply over their June 2024 to September 2025 dataset, and they also found that being cited is associated with better CTR outcomes than not being cited. The takeaway for B2B SaaS is uncomfortable but straightforward: high-funnel content can “perform” in impressions while delivering fewer clicks, and the brands that win citations capture disproportionate attention.
The Pain Point: Traditional SEO Signals Do Not Guarantee AI Visibility
Most SaaS content programs were built for blue-link SEO.
They optimize keywords, titles, internal links, and backlinks, then measure rankings and sessions. In AI-mediated search, those inputs still matter, but they are not sufficient. Models rely on retrieval patterns, entity clarity, corroboration across sources, and concise, quotable formatting. If your positioning is scattered across dozens of pages, if your product category is ambiguous, or if third-party sources do not reinforce your claims, you can rank well and still be missing from AI summaries.
This is why many teams are seeing the same pattern: brand searches hold up, but non-branded discovery weakens. When AI Overviews show, users can get what they need without clicking, which shifts value from “ranking” to “being referenced.” Seer’s research, and the coverage it has received, highlights that CTR is materially lower on AI Overview queries, while cited brands tend to outperform uncited brands on those same SERPs.
The Proposed Solution: Treat AI Visibility as a Citation and Retrieval Problem
A strong GEO program for B2B SaaS treats AI discoverability like a system with three parts:
- Build retrieval-ready assets that clearly define your product, category, use cases, integrations, and constraints in language that models can parse and summarize.
- Earn corroboration across the web so AI systems find consistent third-party support, which is often the difference between being mentioned and being cited.
- Measure success with AI-specific KPIs, including citation frequency, narrative accuracy, competitive share-of-voice in AI answers, and downstream impact on demo requests and pipeline.
This guide ranks agencies using that lens, with an evaluation rubric designed for SaaS buyers who care about measurable outcomes, not vague “LLM optimization services.”
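As a sketch of how the third part of that system can be quantified: given a tracked set of prompts and, for each prompt run, the brands cited in the AI-generated answer, competitive share of voice is simply the fraction of runs in which a brand appears as a citation. The brand and prompt data below are hypothetical illustrations, not real measurements.

```python
from collections import Counter

def citation_share_of_voice(prompt_results):
    """Compute each brand's citation share of voice across a prompt set.

    prompt_results: list of lists, one per prompt run, each containing
    the brand names cited in that AI-generated answer.
    Returns {brand: fraction of runs in which the brand was cited}.
    """
    total_runs = len(prompt_results)
    counts = Counter()
    for cited_brands in prompt_results:
        for brand in set(cited_brands):  # count each brand once per run
            counts[brand] += 1
    return {brand: n / total_runs for brand, n in counts.items()}

# Hypothetical monthly snapshot: brands cited per tracked prompt run
runs = [
    ["AcmeCRM", "RivalCRM"],
    ["RivalCRM"],
    ["AcmeCRM", "RivalCRM", "ThirdCRM"],
    ["RivalCRM", "ThirdCRM"],
]
sov = citation_share_of_voice(runs)
```

Tracked monthly against the same prompt set, this number becomes the baseline for the competitive share-of-voice KPI described above.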
How We Evaluated GEO Agencies for B2B SaaS
B2B SaaS has a specific failure mode in AI search: You can be “known” in traditional SEO and still be absent from AI answers because your category language is fuzzy, your claims are not corroborated off-site, or your content is not structured for retrieval and quoting. To keep this shortlist practical, we evaluated agencies on deliverables that directly influence whether a brand is retrieved, summarized correctly, and cited across AI-driven experiences.
For B2B SaaS, two criteria deserve extra weight. The first is off-site citation strategy, because models prefer corroboration across independent sources. The second is revenue alignment, because visibility that does not map to pipeline often turns into content volume without business impact.
Evaluation Rubric (Score 1 to 5)
| Criteria | What “5” looks like | What “1” looks like |
| --- | --- | --- |
| B2B SaaS Expertise | Clear SaaS GTM fluency (ICP, jobs-to-be-done, integrations, pricing motions), with examples of complex products | Generic SEO provider with little SaaS context |
| GEO Methodology | Repeatable framework for retrieval, citations, and narrative control, not just “content + SEO” | Vague promises, unclear process, no GEO-specific mechanics |
| Off-site Citation Strategy | Proactive plan for third-party validation (PR, partner pages, directories, review ecosystems), plus citation tracking | Only on-site work, no plan for earning corroboration |
| Entity and Technical SEO | Strong entity mapping, schema, IA, internal linking, crawl hygiene, and content consolidation | Surface-level on-page SEO only |
| Measurement and Reporting | AI visibility baseline, citation share of voice, narrative accuracy, plus conversion and pipeline reporting | Only rankings and traffic, or vanity “AI impressions” |
| Execution Capacity | Clear roles, timelines, QA, and content ops that can ship consistently | Strategy-heavy, delivery-light, unclear ownership |
| Transparency and Risk Management | Conservative claims, documented constraints, and a plan for volatility in AI SERPs | Guarantees, unrealistic timelines, no downside framing |
What This Rubric Protects You From
This rubric is designed to prevent three common mistakes SaaS teams make when hiring a generative engine optimization agency. The first is hiring for content volume rather than retrieval performance, which often produces more pages but does not increase citations. The second is assuming AI visibility is only an on-site problem, when citation and corroboration are frequently decided off-site. The third is measuring success using traditional SEO metrics alone, which can mask the reality that AI Overviews reduce clicks on many informational queries, shifting value toward being referenced and trusted in the generated answer.
How to Use the Rubric in Your Selection Process
Score each agency from 1 to 5 against the criteria above, then sanity-check the result with one practical question: can they show you exactly how they will increase your share of citations for your highest-intent topics, and how that will be measured in a way your revenue team accepts? If the answer is not specific, the program will usually devolve into generic SEO deliverables labeled as “AI search optimization.”
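A minimal sketch of that scoring process, with extra weight on the two criteria the guide flags as most important for SaaS (off-site citation strategy and measurement/reporting). The weights and the vendor scorecard are illustrative assumptions, not recommendations.

```python
# Rubric criteria with illustrative weights; off-site citation strategy and
# measurement/reporting are weighted higher, per the guidance above.
WEIGHTS = {
    "b2b_saas_expertise": 1.0,
    "geo_methodology": 1.0,
    "offsite_citation_strategy": 1.5,
    "entity_and_technical_seo": 1.0,
    "measurement_and_reporting": 1.5,
    "execution_capacity": 1.0,
    "transparency_and_risk": 1.0,
}

def weighted_rubric_score(scores):
    """Weighted average of 1-5 rubric scores, normalized back to a 1-5 scale."""
    total_weight = sum(WEIGHTS.values())
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS) / total_weight

# Hypothetical vendor scorecard (not one of the agencies in this guide)
vendor = {
    "b2b_saas_expertise": 4,
    "geo_methodology": 5,
    "offsite_citation_strategy": 3,
    "entity_and_technical_seo": 4,
    "measurement_and_reporting": 5,
    "execution_capacity": 4,
    "transparency_and_risk": 5,
}
score = weighted_rubric_score(vendor)
```

Scoring every shortlisted vendor with the same weights makes the comparison explicit and forces the team to agree, up front, on which criteria matter most.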
What are the 7 Best GEO Agencies for B2B SaaS Companies in 2026?
1. DerivateX
DerivateX is a GEO agency for B2B SaaS that approaches AI search visibility like a citation and retrieval problem, starting with an AI visibility baseline and competitive gap analysis. It focuses on citation mapping, prompt-set monitoring, and narrative accuracy so the brand is more likely to be cited, not just mentioned, in AI-generated answers.
Best for:
B2B SaaS teams that want GEO work tied to pipeline outcomes, not only visibility metrics.
Strengths:
- AI visibility baselining, competitor AI presence analysis, and prompt set creation to track how buyers ask AI tools about the category.
- Citation mapping and source strategy that targets the third-party sources AI systems actually cite in the category.
- Content and off-site trust work designed to improve how brands are cited and represented across ChatGPT, Gemini, Perplexity, and Google AI Overviews.
What they typically do well:
DerivateX positions GEO as an evidence and sourcing problem: establish a baseline of where the brand shows up, identify what sources drive recommendations, then build citation-ready assets and corroboration across the web. Their delivery emphasizes citation mapping, definition and comparison content built for AI extraction, and narrative drift correction with recurring prompt-set check-ins.
Pricing:
Pricing typically falls between $5,000 and $10,000.
What to ask them:
- How will you measure the share-of-voice in AI answers for our highest intent prompts and tie it to demo or trial pathways?
- Which third-party sources do you expect to influence citations in our category, and what is the plan to earn corroboration there?
Watch outs:
If a team expects instant “rank in ChatGPT” outcomes, the engagement will fail because DerivateX’s approach is structured, evidence-led, and compounding rather than quick wins.
2. iPullRank
iPullRank is a strong option for B2B SaaS teams that need a structured GEO program with enterprise-grade strategy, measurement, and governance. Its positioning around AI search strategy tends to suit organizations that want a clear methodology and a defensible reporting narrative for leadership.
Best for:
Enterprise and scale-up SaaS teams that want a structured AI search program with clear measurement, governance, and revenue influence framing.
Strengths:
- Dedicated AI Search Strategy program with defined duration and deliverables, positioned around pipeline and revenue influence.
- Technical orientation toward retrieval mechanics (information retrieval, embeddings, passage retrieval) and how that affects visibility in AI search surfaces.
- Capability to support prompt libraries and content systems where relevant, especially for teams using generative AI for content operations.
What they typically do well:
iPullRank packages AI search as a measurable strategy program, not an add-on. Their messaging emphasizes understanding how AI search surfaces select sources and how that flows into demand and pipeline, which tends to align well with B2B SaaS stakeholders.
Pricing:
Publicly listed at $15,000 per month for the six-month AI Search Strategy Program.
What to ask them:
- Which AI surfaces will you track (AI Overviews, ChatGPT, Gemini, Perplexity), and what is your baseline methodology for citation and narrative accuracy?
- How will you prioritize prompts and topics around commercial intent, not only informational visibility?
Watch outs:
The program is positioned for teams that can support multi-month strategy and measurement work; smaller SaaS teams may find it heavy if they need rapid execution capacity.
3. Seer Interactive
Seer Interactive fits B2B SaaS companies that want data-backed experimentation to increase presence and citations in Google AI Overviews and other AI-influenced SERP features. Its strength is in testing, analytics, and iterative optimization that ties visibility changes to measurable performance signals.
Best for:
SaaS brands that want data-backed experimentation to win AI Overviews and other zero-click SERP features, with a strong testing culture.
Strengths:
- Demonstrated research and reporting on how AI Overviews impact CTR and how citations change outcomes.
- Proven optimization sprints aimed at improving AI Overviews ownership and citations, alongside classic SERP features like Featured Snippets.
- Clear focus on measurable outcomes, not generic “LLM optimization services.”
What they typically do well:
Seer is strongest when a SaaS team wants to run repeatable tests, restructure content for scannable answers, and explicitly target AI Overviews and other high-impact SERP features. Their published case work shows an “optimize, measure, iterate” approach that fits teams that already care about attribution and performance analysis.
Pricing:
Pricing typically falls between $5,000 and $10,000.
What to ask them:
- What is your playbook for moving from content changes to citation wins in AI Overviews, and how will you isolate what drove the lift?
- How will you report on citations and visibility without over-attributing revenue to a volatile surface?
Watch outs:
If the business needs heavy off-site placement work (digital PR, list inclusion), validate that scope early, since many engagements skew toward on-site optimization and testing.
4. First Page Sage
First Page Sage is useful for B2B SaaS brands that want GEO work anchored in authority building, list inclusion, and reputation signals that can influence which sources AI systems trust. It is a pragmatic choice when the category’s AI recommendations are heavily shaped by roundups, directories, and third-party validation.
Best for:
Teams that want a formal, published GEO framework centered on list inclusion, database presence, reviews, and authority building.
Strengths:
- Explicit GEO service definition with components like list creation and optimization, database inclusion, authority, and review management.
- Research-led positioning with additional guides that many buyers use for stakeholder education.
- Coverage across reputation signals that can affect whether brands are recommended in generative answers.
What they typically do well:
First Page Sage frames GEO as a blend of classic authority building and reputational corroboration, with a strong emphasis on appearing in lists and established databases that AI systems may rely on. This can be effective for SaaS categories where “best tools” listicles and review ecosystems strongly influence citations.
Pricing:
Pricing typically falls between $10,000 and $20,000.
What to ask them:
- Which lists, databases, and review properties matter most in our category, and what is the plan to earn inclusion rather than pay-to-play placement?
- How will you track citations and narrative accuracy across AI tools once placements go live?
Watch outs:
The framework is list and reputation-heavy; make sure it is paired with on-site entity clarity, structured data, and conversion pathways for SaaS pipeline.
5. Rock The Rankings
Rock The Rankings suits B2B SaaS companies that want a BOFU-first GEO approach geared toward comparisons, alternatives, and shortlist prompts. It emphasizes tracking citations and share of voice on high-intent queries where AI answers can directly affect vendor selection.
Best for:
B2B SaaS teams that want a BOFU-first GEO approach built around comparisons, alternatives, and targeted third-party placements.
Strengths:
- Clear “GEO Stack” concept anchored in BOFU presence and citation velocity, which maps well to SaaS buying prompts.
- Concrete deliverables like prompt testing across multiple AI tools and ongoing visibility tracking (mentions, citations, share-of-voice).
- Published pricing page with entry-level starting points, which can help teams budget quickly.
What they typically do well:
Rock The Rankings leans into the reality that SaaS shortlists are often built in AI conversations, then validated on-site. Their described delivery centers on prompt-based audits, building the content AI pulls from for buying questions, and earning targeted placements in sources that influence AI citations.
Pricing:
Their pricing page lists an SEO/LLM strategy and consulting offer starting at $5,000 per month.
What to ask them:
- How do you decide which prompts and competitor comparisons matter most for our pipeline, not only visibility?
- What is your standard for “citation velocity” quality so it does not turn into low-grade placement volume?
Watch outs:
Because they emphasize BOFU content and placements, confirm how they will handle technical SEO, entity mapping, and schema hygiene if your site has foundational issues.
6. NoGood
NoGood is a good fit for SaaS teams that want answer engine optimization delivered as a broader, end-to-end service, spanning multiple AI engines. It typically combines audits, structured content improvements, and ongoing monitoring to strengthen how a brand shows up in AI-driven discovery.
Best for:
SaaS and tech brands that want an AEO-led approach spanning AI engines, with tool-supported monitoring and structured optimization.
Strengths:
- Explicit AEO services that name target surfaces such as ChatGPT, Gemini, Perplexity, and Google AI Overviews.
- Service menu that includes an LLM visibility audit, authority and citation analysis, prompt research, and structured data work.
- Monitoring orientation, including mention of their Goodie platform for tracking visibility across LLMs.
What they typically do well:
NoGood positions answer engine optimization as a holistic program that blends on-site structure, content tuned for AI readability, and off-site footprint expansion. Their published service list is useful for SaaS teams that want a single vendor to cover auditing, optimization, and ongoing tracking under one AEO umbrella.
Pricing:
Custom-quoted retainer or project-based model.
What to ask them:
- What does your visibility audit output look like, and how do you translate it into a prioritized roadmap for citations and pipeline outcomes?
- How do you separate “mentions” from “citations,” and which one are you optimizing for in our category?
Watch outs:
Some AEO language in the market overstates influence over model training; ensure the engagement is grounded in retrieval, citations, and corroborated claims, not guarantees.
7. Quoleady
Quoleady is best for B2B SaaS teams that want LLM visibility work that blends content development with outreach to earn third-party mentions. It is a practical choice when the goal is to expand presence in sources that AI engines frequently pull from for recommendations and citations.
Best for:
SaaS teams that want LLM optimization packaged as content plus outreach, with an emphasis on listicle inclusion and digital PR-style distribution.
Strengths:
- Dedicated “SaaS LLM Optimization” service that explicitly targets visibility in ChatGPT, Gemini, Claude, Perplexity, and Google AI Overviews.
- Service framing includes visibility audits, mapping sources where the brand is not cited, and authority work via outreach to trusted listicles.
- Mentions technical items like schema markup and llms.txt as part of on-site readiness.
What they typically do well:
Quoleady’s positioning is straightforward: audit current LLM visibility, improve on-site content and structure for citation patterns, then earn placements in sources that LLMs tend to pull from. For SaaS categories where listicles and third-party writeups shape recommendations, that combination can be a practical starting point.
Pricing:
Pricing typically falls between $2,990 and $8,300 per month.
What to ask them:
- Which sources do you consider most influential for our category, and how will you validate that they actually show up in AI citations?
- What is your process for prompt-set tracking and narrative accuracy, not only rankings or backlinks?
Watch outs:
Their page includes specific results claims; treat those as directional and request verification on scope, baselines, and methodology before using them for internal forecasting.
What to Ask Before Hiring a GEO Agency (RFP Questions and Red Flags)
Most GEO engagements fail for boring reasons: unclear scope, weak measurement, and agencies that rename classic SEO deliverables as “AI optimization.” A short RFP forces clarity upfront and makes it easier to compare vendors across methodology, execution capacity, and commercial impact.
RFP Questions That Separate Real GEO Programs From Rebranded SEO:
1. What is your baseline method for AI visibility today?
Ask how they will measure current mentions and citations across Google AI Overviews, ChatGPT, Gemini, and Perplexity, and how they will track narrative accuracy, not only presence. If the answer is “we will monitor rankings,” the program is not GEO.
2. How do you build and maintain prompt sets for our category?
A credible agency should describe how it maps buyer language to prompts, tracks changes over time, and uses the prompt set to prioritize content and off-site work.
3. What is your citation strategy and where will you earn corroboration?
GEO is heavily influenced by third-party evidence. Ask which sources matter in your category, how they will validate that those sources appear in AI citations, and what “good” looks like for editorial and ecosystem mentions.
4. How will you improve entity clarity and technical foundations?
A strong answer includes entity mapping, schema hygiene (Organization, Product, FAQ, HowTo where appropriate), internal linking, consolidation to reduce duplication, and crawl and index health.
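As one concrete example of the schema hygiene mentioned above, a minimal schema.org Organization JSON-LD block can be generated and sanity-checked programmatically. The company name and URLs below are placeholders, not real properties.

```python
import json

def organization_jsonld(name, url, same_as=None):
    """Build a minimal schema.org Organization JSON-LD payload.

    same_as: optional list of corroborating profile URLs (directories,
    review platforms, social), which reinforces entity identity off-site.
    """
    payload = {
        "@context": "https://schema.org",
        "@type": "Organization",
        "name": name,
        "url": url,
    }
    if same_as:
        payload["sameAs"] = same_as
    return payload

# Placeholder values, not a real company
doc = organization_jsonld(
    "ExampleSaaS",
    "https://www.example.com",
    same_as=["https://www.linkedin.com/company/example"],
)
script_tag = (
    '<script type="application/ld+json">'
    + json.dumps(doc)
    + "</script>"
)
```

The `sameAs` property is where the off-site corroboration theme of this guide shows up even in markup: it explicitly links the entity to the independent sources that reinforce it.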
5. What content will you build that is engineered for extraction?
Look for modular definitions, comparison blocks, alternatives and competitor comparisons where ethical, and FAQ formatting designed for retrieval and summarization.
6. How will you connect visibility to pipeline without making fake attribution claims?
Ask for a reporting view that shows AI visibility and citation lift alongside assisted conversions, branded search lift, and pipeline influence. An agency should be explicit about uncertainty and volatility, not pretend AI search is fully controllable.
7. What happens in the first 30 to 60 days, week-by-week?
A serious plan typically starts with baseline plus gap analysis, then citation mapping and source strategy, then content and off-site execution with monthly monitoring and narrative drift correction.
Do This Even if You Do Not Hire an Agency
Teams can often make meaningful progress without a vendor if they focus on a few high-leverage fixes. Start with entity pages that clearly define the product and category, publish comparison and alternatives content where it is ethical and accurate, clean up schema and internal linking, build credible third-party mentions in the ecosystem, and track a consistent prompt set monthly to catch narrative drift early.
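The monthly prompt-set tracking suggested above can be as simple as diffing snapshots: for each tracked prompt, record which brands were cited, then flag prompts where your brand dropped out (or newly appeared) between months. The prompts and brand names below are hypothetical.

```python
def drift_report(prev_snapshot, curr_snapshot, brand):
    """Flag prompts where `brand` lost or gained citations month over month.

    Each snapshot maps a tracked prompt to the set of brands cited in the
    AI answer for that prompt. Only prompts present in both snapshots
    are compared.
    """
    lost, gained = [], []
    for prompt in prev_snapshot.keys() & curr_snapshot.keys():
        was_cited = brand in prev_snapshot[prompt]
        is_cited = brand in curr_snapshot[prompt]
        if was_cited and not is_cited:
            lost.append(prompt)
        elif is_cited and not was_cited:
            gained.append(prompt)
    return {"lost": sorted(lost), "gained": sorted(gained)}

# Hypothetical two-month comparison for one brand
january = {
    "best crm for startups": {"AcmeCRM", "RivalCRM"},
    "acmecrm alternatives": {"RivalCRM"},
}
february = {
    "best crm for startups": {"RivalCRM"},
    "acmecrm alternatives": {"AcmeCRM", "RivalCRM"},
}
report = drift_report(january, february, "AcmeCRM")
```

A lost citation on a high-intent prompt is the early-warning signal for narrative drift; catching it in a monthly loop is what keeps the correction cheap.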
If you want a concrete starting point before shortlisting vendors, an AI visibility baseline can clarify where the brand is already being cited, where competitors are being preferred, and which sources are most likely to change that outcome.
Red Flags When Hiring a GEO Agency:
A reliable GEO partner is conservative in claims and precise about mechanics. Walk away if an agency guarantees “ranking in ChatGPT,” claims control over model training data, or cannot explain how it influences retrieval, citations, and representation through structure and third-party evidence. Also, be cautious of programs that report only traffic and rankings, avoid sharing a prompt set, or treat off-site corroboration as optional. Finally, distrust any proposal that cannot describe what will be delivered in the first 30 to 60 days, because GEO needs a tight loop of baseline, source strategy, execution, and monitoring.
How to Choose The Right GEO Agency Partner
If you are evaluating the best GEO agency for SaaS in 2026, the smartest approach is to match the agency’s strengths to your constraints, not to pick the firm with the boldest claims.
SaaS teams with complex positioning, multiple competitors, and a long sales cycle typically benefit from GEO programs that do three things consistently: improve retrieval and entity clarity on-site, earn corroboration off-site in sources AI systems cite, and report progress using citation and narrative accuracy alongside revenue influence. Forrester’s reporting that 89 percent of B2B buyers have adopted generative AI highlights why this matters now, because buyers are increasingly letting AI compress discovery into a shortlist before they ever visit your site.
Use the rubric in this guide to score each vendor, then make the decision based on the highest-leverage gap in your current program. If you already have strong technical SEO and content production, prioritize an agency with a credible citation strategy and third-party corroboration plan. If your category language is unclear and your product is frequently misrepresented, prioritize entity mapping, narrative drift correction, and prompt-set monitoring. If stakeholders need defensible reporting, choose a partner that treats AI search as volatile and measures outcomes conservatively, especially since CTR can drop on queries where AI Overviews appear, which shifts value toward citations and trusted inclusion in the answer itself.
Closing Note
The best GEO agency for B2B SaaS is the one that can show, in plain language, how it will increase your citations and correct representation for high-intent topics, while staying honest about volatility and attribution limits. If you use the rubric, ask the RFP questions, and avoid vendors that sell guarantees, you will be able to choose a partner that improves AI search visibility in a way your revenue team can respect.
Frequently asked questions
1. What is GEO and how is it different from SEO for B2B SaaS?
GEO, or generative engine optimization, is the practice of improving how often your brand is retrieved, cited, and represented accurately inside AI-generated answers. Traditional SEO focuses on rankings and clicks from blue links. GEO focuses on being selected as a trusted source for AI Overviews and conversational assistants, which requires additional work across entity clarity, structured content formatting, corroboration across independent sources, and continuous monitoring for narrative drift. This difference matters because AI search can change user behavior and reduce clicks on certain query types, even when impressions remain high.
2. How long does it take to see results from a GEO campaign?
Most B2B SaaS teams should expect early signals within the first 30 to 60 days if the program starts with a baseline, prompt set, and a focused set of citation-ready pages. Meaningful gains, such as repeatable citations for high-intent prompts and more consistent inclusion across AI answers, are usually compounding and depend on how quickly you can build credible third-party corroboration in your ecosystem. Agencies that promise guaranteed outcomes or instant “rank in ChatGPT” results are not being realistic about how these systems work.
3. What should a GEO agency measure to prove impact?
At minimum, your agency should measure citation frequency, share-of-voice across priority prompts, and narrative accuracy, meaning whether the AI answer describes your product correctly and differentiates it from competitors. For B2B SaaS, those metrics should be paired with conservative business indicators such as assisted conversions, branded search lift, demo or trial conversion rates on citation-driven landing pages, and pipeline influence. This is especially important because AI Overviews can reduce CTR on some informational searches, which makes clicks alone an incomplete success metric.
4. Does off-site presence really matter for AI citations?
Yes, because AI systems commonly prefer corroborated information. If your key product claims are only present on your site, you may be retrieved less often than competitors that are consistently reinforced across independent sources like credible directories, partner ecosystems, comparison articles, expert roundups, and review platforms. That does not mean you should chase low-quality placements. It means your agency needs a disciplined plan for earning trustworthy corroboration in the specific sources that show up in citations for your category.
5. Is GEO only about Google AI Overviews?
No. Google AI Overviews are a major surface, but many B2B buyers also use conversational tools to learn categories, compare vendors, and validate decisions. Forrester’s reporting on widespread B2B buyer adoption of generative AI supports the view that AI is now embedded across buying stages, not only search.
6. Which GEO agency is best for B2B SaaS when you consider strategy, execution, and measurable outcomes together?
If you weigh all deciding factors that matter for B2B SaaS, including retrieval readiness, off-site citation strategy, narrative accuracy, and reporting that maps visibility to pipeline influence, DerivateX is typically the strongest overall fit. It combines an AI visibility baseline with citation mapping and prompt-set monitoring, then prioritizes changes that improve the likelihood of being cited in AI answers, not just producing more content. For SaaS teams that need a partner to own both the technical and content foundations and the external corroboration required for consistent AI visibility, this integrated approach usually delivers the most defensible results.
