2026 G2 x SignalScore Benchmark Report

Nobody's Telling You Your Homepage Is Broken. We Will.

We scored 50 B2B SaaS homepages across 8 behavioral psychology dimensions. The average: 61.3 out of 100. The highest: 72. 80% fail on the same dimension. Here's the data.

50
Companies Scored
61.3
Mean Score (of 100)
8
Scored Above 70 Overall
8
Scoring Dimensions

Table of Contents

  1. Methodology: How We Scored
  2. Aggregate Findings: The State of SaaS Messaging
  3. G2 Rank vs. SignalScore: Does Market Position Predict Quality?
  4. Category Breakdowns: Who Wins, Who Loses
  5. Dimension Deep-Dives: All Eight Dimensions
  6. The Buyer Language Gap
  7. Best Homepage: Insider at 72
  8. The Superlatives
  9. Full Leaderboard: All 50 Companies
  10. What to Fix First
Section 01

Methodology

How we scored 50 homepages across 8 dimensions grounded in behavioral psychology research.

We selected 50 B2B SaaS companies across 10 G2 categories, picking the top 3 and bottom 2 performers by G2 ranking in each category. Every homepage was scored on 8 dimensions, each worth 0-100 points. The overall SignalScore is a weighted average.

The 8 dimensions are built on established research, not opinion. Each maps to specific buyer behaviors documented in academic and practitioner literature.
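As a concrete sketch, the roll-up described above fits in a few lines. The report does not publish its dimension weights, so the equal weighting below is an assumption for illustration only.

```python
# Hedged sketch of the SignalScore roll-up: a weighted average of
# eight 0-100 dimension scores. Equal weights are an assumption;
# the report does not disclose its actual weighting.

DIMENSIONS = [
    "5-Second Verdict", "Story Arc", "Mirror Test", "Status Quo Tax",
    "Safety Net", "Proof Stack", "Logo Test", "The Close",
]

def signal_score(scores, weights=None):
    """Weighted average of the 8 dimension scores (each 0-100)."""
    if weights is None:
        weights = {d: 1.0 for d in DIMENSIONS}  # assumed equal weighting
    total = sum(weights[d] for d in DIMENSIONS)
    return sum(scores[d] * weights[d] for d in DIMENSIONS) / total

# Illustrative (made-up) dimension scores for one homepage:
example = dict(zip(DIMENSIONS, [78, 60, 45, 40, 68, 81, 55, 77]))
print(round(signal_score(example), 1))  # → 63.0
```

Swapping in unequal weights (for example, weighting The 5-Second Verdict more heavily) changes only the `weights` dict, not the roll-up logic.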

The 5-Second Verdict

Value proposition clarity. Can a visitor articulate what you do within 5 seconds? Based on Nielsen Norman Group eye-tracking research.

The Story Arc

Message hierarchy and flow. Does the page tell a coherent story from headline to CTA? Grounded in narrative persuasion theory.

The Mirror Test

Customer-centricity and JTBD framing. Does the page speak to what the buyer wants to accomplish? Based on Christensen's Jobs-to-Be-Done framework.

The Status Quo Tax

Stakes and cost of inaction. Does the page articulate why doing nothing is the worst option? Grounded in Kahneman's loss aversion (Prospect Theory).

The Safety Net

Risk reduction and buyer confidence. Does the page reduce perceived risk of switching? Based on Cialdini's principles and Gartner CEB buyer research.

The Proof Stack

Credibility and social proof. Are there logos, case studies, stats, and testimonials? Grounded in Cialdini's social proof and authority principles.

The Logo Test

Competitive differentiation. Could you swap in a competitor's logo and the page still makes sense? Based on Ries/Trout positioning theory.

The Close

Conversion architecture. Is the CTA clear, is there a low-friction path, are objections handled? Based on Baymard Institute UX research.

Research Foundations

Every dimension maps to peer-reviewed or widely validated frameworks: Kahneman & Tversky's Prospect Theory (loss aversion), Cialdini's Principles of Persuasion (social proof, reciprocity), Christensen's Jobs-to-Be-Done (customer-centricity), Dixon & Adamson's Challenger methodology (reframing the status quo), Gartner CEB's B2B buyer journey research, and Baymard Institute's UX conversion studies.

Scoring Process

Each homepage was analyzed by evaluating the full-page content against dimension-specific rubrics (0-100 scale per dimension). Scoring prioritized observable evidence: what is on the page, not what we know about the company. A market leader with a poorly written homepage gets scored on its homepage, not its reputation.

Beyond the 8 dimensions, we also captured quantitative page-level metrics: word count above the fold (average: 98.7, median: 45.5), buyer-centric vs. company-centric sentence counts, social proof type and placement, competitive claim presence, buyer persona clarity, and primary CTA text. These data points inform the analysis throughout the report.

All 50 companies received individual GTM teardown analyses (linked throughout this report). Those teardowns provide company-specific findings, recommendations, and the full scoring rationale.

Section 02

Aggregate Findings

The overall picture is not flattering. Most B2B SaaS homepages are mediocre at best, with a universal blind spot that almost nobody addresses.

61.3
Industry Average SignalScore (out of 100)
8
Companies scored above 70 on their overall SignalScore. The ceiling was 72, reached by 7 companies tied at the top.
62.0
Median score, close to the mean. Most companies cluster in the 51-70 range with a standard deviation of 9.2.
3
Companies scored below 50. The floor has risen, but the bottom still drags.
27
Companies scored above 60. Just over half reached what we'd call "competent" messaging.
[Chart: Score Distribution Across 50 Companies]
[Chart: Industry Average by Dimension (Dimension Averages, Weakest to Strongest)]
The Universal Weakness: Nobody Talks About the Cost of Doing Nothing

The Status Quo Tax averaged 41.5 out of 100. That is the lowest dimension by a clear margin, 2 points below the next-worst. A full 80% of companies scored below 50 on this dimension. Only 1 scored above 70.

This is the most reliable finding in the entire study: 35 out of 50 companies (70%) had Status Quo Tax as their single weakest dimension. Only 1 company out of 50 (2%) explicitly stated the cost of inaction on their homepage.

Kahneman's research tells us losses hit 2x harder than equivalent gains. Yet nearly every B2B SaaS homepage focuses exclusively on features and benefits while ignoring the question that actually drives buying decisions: "What happens if I keep doing what I'm doing today?"
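The "2x harder" figure comes from the prospect-theory value function. As a sketch, the canonical Tversky and Kahneman (1992) form with their median-fitted parameters (alpha = 0.88, lambda = 2.25) reproduces the asymmetry:

```python
# Prospect-theory value function (Tversky & Kahneman, 1992), using
# their median-fitted parameters alpha = 0.88, lam = 2.25. It shows
# why a loss of 100 feels roughly twice as large as a gain of 100.

def value(x, alpha=0.88, lam=2.25):
    """Subjective value of an outcome x (gain if x >= 0, loss if x < 0)."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

gain = value(100)   # felt upside of a 100-unit gain
loss = value(-100)  # felt downside of a 100-unit loss
print(round(-loss / gain, 2))  # → 2.25
```

The asymmetry is why a "cost of inaction" section works: framing the status quo as an ongoing loss engages the steeper side of the curve.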

The Social Proof Vacuum

68% of homepages show no social proof above the fold. Of the 50 companies analyzed, 34 had zero visible social proof (no logos, no testimonials, no metrics) in their initial viewport. Only 5 displayed customer logos prominently. Just 4 featured testimonials.

This is despite The Proof Stack averaging 57.3, putting it in the middle of the pack. Companies tend to bury their best credibility signals below the fold, where most visitors never scroll. Cialdini's research is unambiguous: social proof is most effective when it appears early in the decision process, not as an afterthought near the footer.

The Proof Stack is also one of the most polarized dimensions in the study, with a standard deviation of 14.2. 15 companies scored above 70, yet 14 scored below 50. Companies either invest in social proof or completely ignore it. There is almost no middle ground.

Nobody Plays Offense on Differentiation

Only 36% of companies make any explicit competitive claim on their homepage. The remaining 64% rely on generic "all-in-one," "AI-powered," or "purpose-built" language that could describe any competitor in their category.

The Logo Test (competitive differentiation) averaged 52.8, making it the third-weakest dimension. 38% of companies scored below 50. Only 3 companies scored above 70.

Ries and Trout's positioning research says differentiation must be specific, ownable, and immediately apparent. Most B2B SaaS homepages fail all three tests.

Where Companies Actually Excel

When we identified each company's single strongest dimension, two dominated:

  • The Proof Stack: 19 companies (38%) had this as their best dimension. When companies invest in social proof, they go all-in.
  • The Close: 15 companies (30%) had this as their best dimension. CTAs, free trials, and demo buttons are well understood.
  • The 5-Second Verdict: 12 companies (24%) led with their value proposition.
  • The Logo Test and The Safety Net combined for just 4 companies (8%). Almost nobody's strongest suit is differentiation.

The pattern is clear: companies invest in the visible, tactical elements (proof, CTAs, headlines) and underinvest in the strategic ones (stakes, differentiation, buyer framing).

The Buyer Persona Problem

We evaluated whether each homepage clearly communicated who the product is for. Of the 43 companies where we could fully evaluate this: 32 (74%) had a clear buyer persona. 8 (19%) were vague about their target buyer. 3 (7%) had no discernible buyer persona at all.

Having a clear persona does not guarantee good messaging. Several companies with crystal-clear buyer personas still scored below 50 overall because they used that clarity to describe product features rather than buyer outcomes. Knowing your audience is table stakes. Speaking to their world, not just at their job title, is what separates the top third from the bottom third.

The Tactical-Strategic Gap

A pattern runs through every finding above: B2B SaaS companies are good at the visible, tactical elements of homepage messaging and bad at the strategic ones. The three strongest dimensions (The 5-Second Verdict at 67.2, The Close at 65.3, The Story Arc at 58.3) are all execution-layer work: headlines, CTAs, free trials, page structure. The three weakest dimensions (The Status Quo Tax at 41.5, The Mirror Test at 43.6, The Logo Test at 52.8) all require strategic positioning decisions: why change, who benefits, why us.

This is not a copywriting problem. It is a strategy problem. You can A/B test CTA button colors all day, but if your page does not make inaction feel costly, does not frame the conversation around the buyer's world, and does not differentiate from alternatives, no amount of tactical optimization will close the gap.

The companies at the top of the leaderboard are not there because they have better design or bigger budgets. They are there because they made strategic messaging decisions before they started writing copy.

Section 03

G2 Rank vs. SignalScore

G2 category leaders do score higher on homepage messaging, but the gap is smaller than you'd expect, and it nearly disappears on one dimension.

63.2
Average SignalScore for G2 Top 3 companies (30 companies total).
56.5
Average SignalScore for G2 Bottom 2 companies (20 companies total).

The gap is +6.7 points in favor of G2 leaders. That is meaningful, but it is not large enough to say that market leaders have figured out homepage messaging. They are simply less bad.

The most interesting patterns emerge when you look at individual dimensions.

[Chart: G2 Top 3 vs. Bottom 2 by Dimension]
Where Leaders Actually Lead
  • The Safety Net (+7.9 pts): Leaders invest more in risk reduction, stacking free trials, security badges, and demos to build buyer confidence.
  • Proof Stack (+7.8 pts): Leaders layer more social proof types, from logos to case studies to analyst endorsements.
  • Logo Test (+7.1 pts): Leaders are better at differentiation. They make claims that competitors cannot copy-paste.
Where Leaders Are Just As Bad
  • Status Quo Tax (+2.9 pts): The smallest gap. Even market leaders barely articulate the cost of inaction. Their average of 42.7 is still failing.
  • Mirror Test (+5.7 pts): Both groups struggle with customer-centricity. Even leaders talk about themselves too much.
G2 Leaders with Surprisingly Low SignalScores

Market leadership does not guarantee good messaging. These G2 Top 3 companies scored among the lowest in the entire study:

  • HubSpot Sales Hub - 52/100: A G2 Top 3 CRM that scores 9 points below the study average.
  • HubSpot Marketing Hub - 52/100: The #1 marketing automation tool on G2. Their homepage messaging is below average.
  • AB Tasty - 52/100: A G2 Top 2 A/B testing platform that apparently does not A/B test its own homepage.
  • VWO - 54/100: Another A/B testing leader, scoring below the study average.
Section 04

Category Breakdowns

Product Analytics leads the pack. Marketing Automation and ABM cluster near the bottom. And the companies selling marketing tools? They still have some of the weakest homepages.

[Chart: Average SignalScore by G2 Category]

The Marketing Irony

Companies that sell marketing tools (Marketing Automation, ABM, Landing Page Builders) have worse homepages than companies that don't sell marketing tools.

55.3
Marketing Tool Avg (n=15)
62.7
Non-Marketing Avg (n=35)
Top and bottom scorer in each G2 category:

  • Product Analytics | Top: Amplitude (72) / Bottom: Countly (58)
  • Conversational Intelligence | Top: Gong (72) / Bottom: CallMiner (58)
  • Customer Success Software | Top: ChurnZero (72) / Bottom: Akita (52)
  • Sales Engagement | Top: Salesloft (68) / Bottom: Cirrus Insight (58)
  • CRM Software | Top: Salesforce Sales Cloud (71) / Bottom: HubSpot Sales Hub (52)
  • Competitive Intelligence | Top: Klue (68) / Bottom: Competitors App (51)
  • Marketing Automation | Top: Insider (72) / Bottom: Salesforce Pardot (32)
  • A/B Testing Software | Top: Omniconvert (68) / Bottom: AB Tasty (52)
  • Landing Page Builders | Top: Unbounce (72) / Bottom: LanderPage (42)
  • Account-Based Marketing | Top: 6sense (68) / Bottom: Foundry ABM (52)
Section 05

Dimension Deep-Dives

A closer look at all eight dimensions, from the industry's biggest blind spot to its standout strengths.

The Status Quo Tax: The Industry's Biggest Blind Spot

Average: 41.5. That is a failing grade by any standard. 80% of companies scored below 50 on this dimension. The best score was 71 (Akita).

Only 1 out of 50 companies (2%) explicitly stated the cost of inaction on their homepage. This is not just a weakness; it is an industry-wide structural failure.

The research is clear: B2B buyers cite "preference for the status quo" as the #1 reason deals die (Gartner/CEB). Dixon and Adamson's Challenger research shows that the most effective salespeople lead with why change is necessary. But almost no one does this on their homepage.

71
Akita scored highest. Followed by Terminus at 62.
15
Salesforce Pardot scored lowest at 15. Their homepage presents no argument for why the status quo is costing the buyer.
The 5-Second Verdict: The One Thing Most Companies Get (Partly) Right

Average: 67.2. This is the strongest dimension overall and the one with the widest range (18 to 82, a 64-point spread). 32 companies scored above 70 here, more than any other dimension.

The best: ChurnZero and Fireflies.ai, both at 82. Their headlines immediately communicate what the product does and who it's for. The worst: Salesforce Pardot at 18. A marketing automation platform whose headline communicates almost nothing about what the product does or who it's for.

Buyer persona clarity was generally decent: of the 43 companies where we could evaluate it, 32 had a clear buyer persona, 8 were vague, and 3 were absent entirely.

The Logo Test: Most Homepages Are Interchangeable

Average: 52.8. Here is the question we asked: "Could you swap in a competitor's logo and the page would still make sense?" For more than a third of companies, the answer was yes.

38% scored below 50 on differentiation. Only 36% of companies made any explicit competitive claim at all. ChurnZero and Amplitude tied for highest at 72, with Outreach close behind at 71.

This connects directly to the "sea of sameness" problem in B2B SaaS. When every company says "powered by AI" and "all-in-one platform," nobody says anything at all.

The Close: Where G2 Leaders Pull Away

Average: 65.3, the second-strongest dimension overall. 16 companies scored above 70 and only 4% scored below 50. Most companies know how to put a CTA button on a page.

But this is also where G2 leaders separate from laggards: +6.9 points. Market leaders have invested in conversion architecture: clear CTAs, free trials, demo scheduling, low-friction entry points. Smaller companies often have a single "Contact Sales" button and call it a day.

Best: ChurnZero and Pendo tied at 78, followed by Insider (77), and Amplitude and Salesforce Sales Cloud (75). Worst: Salesforce Pardot at 45.

The Story Arc: Most Pages Are Feature Lists, Not Narratives

Average: 58.3. This sits in the middle of the pack, 6th out of 8 dimensions. 22% of companies scored below 50. 6 scored above 70.

No company in the study had Story Arc as their single weakest dimension. It is one of the "easier" dimensions to get partly right because any page with basic sections (hero, features, social proof, CTA) gets partial credit for message hierarchy. The problem is that most companies stop there. They build a brochure, not a narrative.

The research is clear on this: narrative persuasion (Green & Brock, 2000) shows that information presented as a story is more persuasive and more memorable than information presented as a list of facts. The best-scoring companies use a problem-agitate-solution flow or a clear "before/after" progression. Salesforce Sales Cloud (72) moves from the buyer's challenge to the platform's approach to proof. Salesforce Pardot (28) dumps disconnected capability statements with no coherent throughline.

72
Salesforce Sales Cloud scored highest. Clear narrative arc from problem to solution to proof.
28
Salesforce Pardot scored lowest. Disconnected feature statements with no coherent story.
The Mirror Test: More Than Half of SaaS Talks About Itself

Average: 43.6. The second-weakest dimension in the study. 74% of companies scored below 50. Only 1 company, Kompyte (Semrush) at 71, scored above 70.

The Mirror Test asks a simple question: is this page about the buyer, or about you? Clayton Christensen's Jobs-to-Be-Done research established that buyers do not care about your features. They care about the progress they want to make. Pages framed around "you" and "your outcome" convert. Pages framed around "we" and "our platform" inform at best, bore at worst.

The most striking example: Salesforce Pardot scored 12, the lowest in the study. A Salesforce product whose homepage reads like an internal spec sheet. Kompyte (71) takes the opposite approach, centering the page on what the user accomplishes: winning more deals. The difference between 12 and 71 is not budget or design quality. It is a framing decision.

71
Kompyte (Semrush) is the only company above 70. Their page frames everything around helping buyers win more deals.
12
Salesforce Pardot scored lowest. A marketing automation platform that talks about itself instead of the buyer.
The Safety Net: The Dimension Most Companies Get Right (Sort Of)

Average: 55.0. Middle of the pack at 4th-weakest. 30% of companies scored below 50. Only 1 company scored above 70 (Pendo at 71).

Most B2B SaaS companies understand, at a basic level, that they need to reduce purchase risk. Free trials, demo buttons, security badges, and customer logos have become standard practice. But the range still spans 49 points (Pendo at 71 to Salesforce Pardot at 22), and the standard deviation of 10.8 means performance varies widely.

Gartner's CEB research shows that 40-60% of B2B deals end in "no decision," not a competitive loss. The buyer simply could not get comfortable enough to commit. The best companies layer multiple confidence signals throughout the page: free trial near the top, customer logos mid-page, security certifications near the footer, and case studies woven into the narrative. Only 1 company (2%) had Safety Net as their weakest dimension, meaning this rarely drags down an overall score. But when it does, the impact is severe.

71
Pendo layers free trial, customer logos, and security signals throughout the page.
22
Salesforce Pardot lacks visible risk-reduction elements. Buyers have no reason to feel confident.
The Proof Stack: The Most Polarized Dimension in the Study

Average: 57.3. The standard deviation of 14.2 is among the highest of any dimension. 15 companies scored above 70, yet 14 scored below 50. Companies either invest in social proof or completely ignore it.

Unbounce scored 85 on the Proof Stack, the single highest individual dimension score in the entire study across all 400 data points (50 companies x 8 dimensions). Their homepage layers G2 reviews, customer testimonials with named companies, integration partner logos, and clear trust signals throughout. Insider follows at 81 with a similar multi-layered approach. Salesforce Pardot scored 34: a Salesforce product with minimal visible proof on its homepage despite having access to the entire Salesforce customer base.

Cialdini's research on social proof is among the most replicated findings in behavioral science. In B2B, proof takes specific forms: customer logos (most common but weakest), case studies with specific metrics (strongest), analyst endorsements, and compliance/security certifications. The best companies treat proof as a system, placing different types at different points in the page narrative. The worst treat it as a checkbox, dumping a logo bar above the footer and calling it done.

85
Unbounce scored 85, the highest single dimension score in the entire study.
34
Salesforce Pardot. A Salesforce product with almost no social proof on its homepage.
Notable Weakness Distribution

When we identified each company's single weakest dimension, the results were lopsided:

  • The Status Quo Tax: 35 companies (70%)
  • The Mirror Test: 5 companies (10%)
  • The Logo Test: 5 companies (10%)
  • The Proof Stack: 3 companies (6%)
  • The Safety Net: 1 company (2%)
  • The Close: 1 company (2%)

No company had The 5-Second Verdict or The Story Arc as their weakest dimension. Those two are the easiest to get partly right because they are the most visible.

Section 06

The Buyer Language Gap

We counted every sentence on 44 homepages and classified them as buyer-centric or company-centric. The results explain why most pages feel like product datasheets.

3.8:1
Average buyer-to-company sentence ratio across all homepages
24:1
Unbounce leads with 24 buyer-centric sentences for every 1 company-centric sentence. Combined with a score of 72, they prove that buyer-focused language works when the framing is outcome-oriented.
0.2:1
Woopra has the lowest ratio: 7 buyer-centric sentences vs. 32 company-centric. Their homepage reads like an internal product brief.
What the Numbers Reveal

The average homepage has 17.3 buyer-centric sentences vs. 7.5 company-centric sentences. The average per-company ratio is 3.8:1, which sounds decent until you look at the median: 2.5:1. A handful of extreme outliers pull the mean up. Half of all companies have a ratio of 2.5:1 or lower, meaning they spend nearly as much time talking about themselves as they do talking about the buyer.
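The mean-versus-median gap is ordinary outlier math. A quick sketch with hypothetical per-company ratios (not the report's raw data) shows how a single 24:1 page drags the mean well above the median:

```python
import statistics

# Hypothetical per-company buyer:company ratios, NOT the report's raw data.
# A single Unbounce-like outlier (24.0) inflates the mean; the median
# better describes the typical homepage.
ratios = [0.2, 0.8, 1.5, 2.0, 2.5, 2.5, 3.0, 3.5, 4.0, 24.0]

print(round(statistics.mean(ratios), 1))  # → 4.4 (pulled up by the outlier)
print(statistics.median(ratios))          # → 2.5 (the typical page)
```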

The distribution is telling: 18 companies (41%) have a ratio above 3:1, which represents a buyer-first page. 9 companies (20%) clear 5:1. On the other end, 18 companies (41%) fall below 2:1, and 7 companies (16%) have a ratio below 1:1, meaning they use more company-centric language than buyer-centric language on their own homepage.

Above the fold matters most. In the first two sections of each homepage, we counted an average of 9.1 buyer-language instances vs. 4.3 "we/our" instances. Companies tend to front-load buyer-centric language where the stakes are highest, then revert to product-speak further down the page. The problem is that most visitors never scroll past the first two sections, so everything below the fold is speaking to an audience that has already left.
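The sentence counting described in this section can be approximated with a crude pronoun heuristic. This is an illustrative simplification, not the report's actual rubric, which also weighs framing and context:

```python
import re

# Crude illustrative classifier: a sentence mentioning "you/your" with no
# "we/our/us" counts as buyer-centric; any sentence with "we/our/us" counts
# as company-centric. The report's real method is richer than this.
BUYER = re.compile(r"\b(you|your)\b", re.IGNORECASE)
COMPANY = re.compile(r"\b(we|our|us)\b", re.IGNORECASE)

def buyer_company_counts(page_text):
    sentences = re.split(r"(?<=[.!?])\s+", page_text)
    buyer = sum(1 for s in sentences
                if BUYER.search(s) and not COMPANY.search(s))
    company = sum(1 for s in sentences if COMPANY.search(s))
    return buyer, company

text = ("Grow your pipeline without adding headcount. "
        "You get answers in minutes. "
        "We built our platform on a proprietary engine.")
print(buyer_company_counts(text))  # → (2, 1)
```

As the next subsection argues, a heuristic like this captures pronouns but not Jobs-to-Be-Done framing quality, which is why ratio and Mirror Test scores diverge.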

Why Pronoun Ratios Are Not the Whole Story

Here is the counterintuitive finding: high buyer:company ratios do not reliably predict high Mirror Test scores. Unbounce has a 24:1 ratio and scores 45 on the Mirror Test, not the 80+ you might expect from that ratio. Woopra has a 0.2:1 ratio but scored 48. What gives?

The Mirror Test scores capture something pronoun counting cannot: Jobs-to-Be-Done framing quality. Saying "you" constantly does not help if the framing is still feature-oriented ("you can integrate with 500+ tools" is technically buyer-language but reads as a feature list). Christensen's JTBD framework asks whether the page describes the progress the buyer wants to make, not just whether pronouns are in the right place.

The companies that score well on both the ratio AND the Mirror Test are the ones who use buyer-language to describe buyer outcomes. Kompyte (Mirror Test: 71) frames everything around what the user accomplishes. Salesforce Pardot (Mirror Test: 12, ratio: 4.7:1) proves that raw pronoun counts miss the point. Even with buyer-oriented pronouns, the framing is still about the platform, not the buyer's world.

[Chart: Buyer-to-Company Sentence Ratio by Company]
Section 07

Best Homepage: Insider

With a SignalScore of 72, Insider earned the top spot (tied with 6 others). Even the best homepage in the study leaves 28 points on the table.

72
Insider SignalScore
What Insider Gets Right

Their H1 reads: "Be unstoppable in customer engagement"

Insider leads with buyer-focused language and backs it up with the second-strongest Proof Stack in the study (81, behind only Unbounce's 85). Their homepage layers named customer stories with quantified outcomes, recognizable logos, and analyst recognition.

They scored 78 on 5-Second Verdict, 77 on The Close, and 68 on Safety Net. The page flows logically from value prop to proof to CTA with consistent messaging throughout.

[Chart: Insider Dimension Scores]
Best vs. Worst: Insider (72) vs. Pardot (32)

The gap between the best and worst homepage is 40 points. Insider outscores Salesforce Pardot on every single dimension. The biggest gaps: 5-Second Verdict (78 vs. 18 = 60 pts), Proof Stack (81 vs. 34 = 47 pts), and Mirror Test (45 vs. 12 = 33 pts). Pardot's homepage reads like an internal spec sheet. Insider's reads like a conversation with a buyer who has a problem to solve.

Section 08

The Superlatives

Five awards for the companies that stood out, for better or worse.

Best Product, Worst Homepage
HubSpot Marketing Hub, 52/100. The #1 marketing automation tool on G2, with a homepage that scores 9 points below the study average. HubSpot practically invented inbound marketing, yet their own homepage reads like a feature list. The irony is hard to ignore.
Punching Above Their Weight
Omniconvert, 68/100. A bottom-of-the-G2-rankings pick in A/B Testing that outscores most G2 leaders in the study. You do not need market dominance to have good messaging.
The Specialist
5-Second Verdict: 75. Status Quo Tax: 35. A spread of 40 points between best and worst dimensions. Swipe Pages nails their opening pitch but completely fails to make inaction feel costly. Fix one thing, gain massive ground.
The Generalist
Standard deviation of just 6.7 across dimensions, with scores ranging only from 54 to 73. Vitally is consistently solid at everything. No glaring weaknesses, no standout spikes. A well-rounded 68/100.
Most Improved Opportunity
If Salesforce Sales Cloud lifted their Status Quo Tax from 42 to 57, their overall score would jump an estimated 3-4 points. A single section on the cost of inaction could push them from great to top-tier.
Section 09

Full Leaderboard

All 50 companies ranked by SignalScore. Click any company name to read the full teardown.

Rank | Company | Category | SignalScore
1 | Insider | Marketing Automation | 72
2 | ChurnZero | Customer Success Software | 72
3 | Gainsight | Customer Success Software | 72
4 | Gong | Conversational Intelligence | 72
5 | Amplitude | Product Analytics | 72
6 | Pendo | Product Analytics | 72
7 | Unbounce | Landing Page Builders | 72
8 | Salesforce Sales Cloud | CRM Software | 71
9 | Vitally | Customer Success Software | 68
10 | Fireflies.ai | Conversational Intelligence | 68
11 | LogRocket | Product Analytics | 68
12 | 6sense | Account-Based Marketing | 68
13 | Salesloft | Sales Engagement | 68
14 | Outreach | Sales Engagement | 68
15 | Omniconvert | A/B Testing Software | 68
16 | Klue | Competitive Intelligence | 68
17 | Crayon | Competitive Intelligence | 68
18 | Kompyte (Semrush) | Competitive Intelligence | 68
19 | Adobe Marketo Engage | Marketing Automation | 64
20 | Woopra (Appier AIRIS) | Product Analytics | 64
21 | Contify | Competitive Intelligence | 64
22 | Keap | CRM Software | 62
23 | ActiveCampaign | Marketing Automation | 62
24 | Demandbase One | Account-Based Marketing | 62
25 | Apollo.io | Sales Engagement | 62
26 | Mailshake | Sales Engagement | 62
27 | Swipe Pages | Landing Page Builders | 62
28 | Instapage | Landing Page Builders | 62
29 | monday CRM | CRM Software | 58
30 | Totango | Customer Success Software | 58
31 | Fathom | Conversational Intelligence | 58
32 | Chorus by ZoomInfo | Conversational Intelligence | 58
33 | CallMiner | Conversational Intelligence | 58
34 | Countly | Product Analytics | 58
35 | RollWorks | Account-Based Marketing | 58
36 | Terminus (DemandScience) | Account-Based Marketing | 58
37 | Cirrus Insight | Sales Engagement | 58
38 | Kameleoon | A/B Testing Software | 58
39 | SiteSpect | A/B Testing Software | 58
40 | Insightly CRM | CRM Software | 54
41 | VWO | A/B Testing Software | 54
42 | HubSpot Sales Hub | CRM Software | 52
43 | HubSpot Marketing Hub | Marketing Automation | 52
44 | Akita | Customer Success Software | 52
45 | Foundry ABM | Account-Based Marketing | 52
46 | Pagewiz | Landing Page Builders | 52
47 | AB Tasty | A/B Testing Software | 52
48 | Competitors App | Competitive Intelligence | 51
49 | LanderPage | Landing Page Builders | 42
50 | Salesforce Pardot | Marketing Automation | 32
Section 10

What to Fix First

If you're a B2B SaaS company reading this, the data points to three highest-impact changes you can make to your homepage messaging today:

  • Name the cost of inaction. The Status Quo Tax was the weakest dimension for 70% of companies, and only 1 in 50 states what doing nothing costs the buyer.
  • Reframe the page around the buyer's outcomes. The Mirror Test averaged 43.6; pages built around "we" and "our platform" inform at best.
  • Move your strongest proof above the fold. 68% of homepages show no social proof in the initial viewport, even when it exists further down the page.


Where Does Your Homepage Stand?

SignalScore evaluates your homepage across the same 8 dimensions used in this study. Get your score in minutes, not weeks.

Free scorecard delivered via email. Full diagnosis with findings, citations, and prioritized fixes available for $299 after you see your scores.