Most product managers obsess over acquisition metrics such as growth rate, new users, conversion funnels, viral coefficients, CAC, and LTV.
At Synacor, working on AT&T's start.att.net portal, none of that mattered.
Why? Because we didn't acquire users. AT&T acquired customers for internet service, and we were the default homepage that came with it. Open your browser, there we were. Surprise!
This created a fascinating measurement challenge: How do you measure success when users don't choose you?
The Default Traffic Problem
Millions of AT&T internet subscribers opened their browsers and saw our portal. Many checked their email on our portal. Some engaged with the content. Some immediately navigated away to Google, Facebook, or wherever they actually wanted to go.
Traditional metrics were useless:
- New user growth? Tied entirely to AT&T's customer acquisition, which we didn't control. Organic discovery barely existed either: our content was syndicated from elsewhere, so we were never going to crack the top ten in a search engine unless someone shared an article directly.
- Viral metrics? Nobody shares their ISP's default homepage except through article sharing.
- Conversion rates? What were we converting them to? They were already there.
- Referral traffic? Article sharing would be the main avenue.
So what metrics actually mattered?
The Metrics That Told the Real Story
After much experimentation and debate, we focused on metrics that answered one core question: Are we worth visiting, or are we just in the way?
1. Repeat Visits
This was the gold standard. If someone came back to start.att.net deliberately, that meant we provided value, or they were really old, or not very tech-savvy.
We tracked:
- Users who returned within 24 hours
- Users who bookmarked us (vs. just having us as default)
- Direct traffic vs. default-load traffic (we could differentiate based on behavior patterns)
Why it mattered: Repeat visits separated "trapped audience" from "actual audience." If people came back on purpose, we were doing something right. Or they just couldn't figure out how to change their home page in their browser. One of the two.
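For illustration, here's a rough sketch in Python of the kind of behavioral heuristic this relied on; the field names, flags, and thresholds below are invented for the example, not our actual pipeline.

```python
from dataclasses import dataclass

@dataclass
class Visit:
    referrer: str           # "" for direct/default loads
    seconds_on_page: float  # dwell time before leaving or clicking
    clicks: int             # content clicks during the visit
    from_bookmark: bool     # hypothetical flag from the client beacon

def classify_visit(v: Visit) -> str:
    """Rough heuristic: separate default-load traffic from intentional visits."""
    if v.from_bookmark:
        return "intentional"            # user deliberately saved and opened us
    if v.referrer == "" and v.seconds_on_page < 10 and v.clicks == 0:
        return "default_load"           # browser opened, user left immediately
    if v.clicks > 0 or v.seconds_on_page >= 30:
        return "intentional"            # engaged enough to count as on purpose
    return "ambiguous"

# Example usage
visits = [
    Visit(referrer="", seconds_on_page=4, clicks=0, from_bookmark=False),
    Visit(referrer="", seconds_on_page=95, clicks=3, from_bookmark=True),
]
print([classify_visit(v) for v in visits])  # ['default_load', 'intentional']
```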
2. Retention
How many users came back this week? This month? If someone visited us on Day 1, what percentage were still engaging on Day 30?
Unlike a product where users explicitly sign up, we had to define "active" carefully:
- Did they spend more than 10 seconds on the page? (Ruled out accidental loads)
- Did they click on any content? (Showed intent to engage)
- Did they scroll past the fold? (Indicated reading, not just quick bounce)
Why it mattered: Retention told us whether we were providing ongoing value or just getting accidental traffic that immediately bounced. It also helped us gauge whether partners like Taboola were providing interesting content alongside the main media content we served.
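Here's a minimal sketch, assuming hypothetical session records with dwell time, clicks, and scroll depth, of how that "active" definition and a Day-30 retention rate might be computed (in this sketch any one signal qualifies a session as active; the real thresholds and windows were debated endlessly):

```python
from datetime import date, timedelta

def is_active(session: dict) -> bool:
    """A session counts as 'active' only if it clears the engagement bar."""
    return (
        session["seconds_on_page"] > 10      # rules out accidental loads
        or session["content_clicks"] > 0     # shows intent to engage
        or session["scrolled_past_fold"]     # reading, not a quick bounce
    )

def day30_retention(sessions_by_user: dict) -> float:
    """Share of users active at all who are active again roughly 30 days after their first active day."""
    retained = eligible = 0
    for sessions in sessions_by_user.values():
        active_days = sorted({s["day"] for s in sessions if is_active(s)})
        if not active_days:
            continue
        eligible += 1
        first = active_days[0]
        lo, hi = first + timedelta(days=28), first + timedelta(days=32)
        if any(lo <= d <= hi for d in active_days):
            retained += 1
    return retained / eligible if eligible else 0.0

# Toy example: one user comes back a month later, one never really engages
sessions = {
    "u1": [
        {"day": date(2012, 3, 1), "seconds_on_page": 45, "content_clicks": 2, "scrolled_past_fold": True},
        {"day": date(2012, 3, 30), "seconds_on_page": 60, "content_clicks": 1, "scrolled_past_fold": True},
    ],
    "u2": [
        {"day": date(2012, 3, 1), "seconds_on_page": 3, "content_clicks": 0, "scrolled_past_fold": False},
    ],
}
print(day30_retention(sessions))  # 1.0 -- only u1 counts as active at all, and u1 returned
```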
3. Page Depth
How many pages did users visit per session? If someone landed on the homepage and immediately left, that was a bounce. If they clicked through to read articles, watch videos, or explore sections, that showed engagement.
We tracked:
- Pages per session (average across all users)
- Distribution (what percentage viewed 1, 2, 3, 5+ pages)
- Cohort comparison (did power users vs. casual users differ?)
Why it mattered: Page depth was a proxy for "Are we interesting enough to explore?" One-page visits meant we were a speed bump. Multi-page sessions meant we were a destination.
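A quick sketch of the roll-up this implies, with invented session data: the average pages per session plus the distribution buckets, since the average alone hides whether you're a speed bump or a destination.

```python
from collections import Counter

def page_depth_report(pages_per_session: list) -> dict:
    """Summarize session depth: the average hides the shape, so keep both."""
    avg = sum(pages_per_session) / len(pages_per_session)
    buckets = Counter("5+" if n >= 5 else str(n) for n in pages_per_session)
    total = len(pages_per_session)
    distribution = {k: buckets[k] / total for k in ("1", "2", "3", "4", "5+")}
    return {"avg_pages_per_session": round(avg, 2), "distribution": distribution}

# Example: mostly one-page bounces with a small tail of explorers
print(page_depth_report([1, 1, 1, 2, 1, 3, 1, 7, 1, 5]))
```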
4. Time on Site
How long did users spend with us? This was tricky to measure accurately (browser tabs stay open, people get distracted), but we could get directionally useful data.
We looked at:
- Active time (excluding long periods of inactivity)
- Time distribution (clustering around 30 seconds vs. 5 minutes vs. 20 minutes told different stories)
- Time per article/video consumed
Why it mattered: Time on site separated "glanced and left" from "actually consumed content." Though we learned to be suspicious of really high time-on-site numbers—sometimes that meant confusing navigation or windows left open, not engagement.
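One common way to approximate active time is to sum the gaps between user events (clicks, scrolls, heartbeats) and drop gaps longer than an idle threshold, so a tab left open all afternoon doesn't count. A sketch under that assumption, with a made-up 30-second cutoff:

```python
def active_time_seconds(event_timestamps: list, idle_cap: float = 30.0) -> float:
    """Sum gaps between successive events, ignoring gaps longer than idle_cap.

    event_timestamps: epoch seconds of clicks/scrolls/heartbeats within one session.
    """
    ts = sorted(event_timestamps)
    active = 0.0
    for prev, curr in zip(ts, ts[1:]):
        gap = curr - prev
        if gap <= idle_cap:   # user was plausibly still reading
            active += gap
        # longer gaps: tab left open, user wandered off -- don't count them
    return active

# A session with a 10-minute idle gap in the middle only counts 50s of activity
events = [0, 5, 20, 45, 645, 650]
print(active_time_seconds(events))  # 50.0
```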
5. Time on Each Section
We had different sections: news, entertainment, sports, weather, local content. Tracking time spent in each section told us:
- Which topics resonated with our audience
- Where to invest in content partnerships
- What drove engagement vs. what was ignored
We discovered surprising patterns. Entertainment and celebrity content got clicked frequently but had shallow engagement. Comics got fewer clicks but much deeper reading time. Finance had intense engagement from a subset of users and zero engagement from everyone else.
Why it mattered: This guided content strategy and partnership investments. No point paying for content nobody consumed. It also made sense to invest time and effort in the experience for superfans: for example, letting a user build a stock-tracking portfolio made them surprisingly sticky.
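A sketch of the section-level roll-up that surfaces exactly that pattern, using invented event records: click counts alongside average active seconds per section, so "frequently clicked but shallow" and "rarely clicked but deep" both show up.

```python
from collections import defaultdict

def section_engagement(events: list) -> dict:
    """Per section: how often it gets clicked vs. how long people actually stay."""
    clicks = defaultdict(int)
    seconds = defaultdict(float)
    for e in events:  # e.g. {"section": "comics", "active_seconds": 240}
        clicks[e["section"]] += 1
        seconds[e["section"]] += e["active_seconds"]
    return {
        s: {"clicks": clicks[s], "avg_seconds": round(seconds[s] / clicks[s], 1)}
        for s in clicks
    }

events = [
    {"section": "entertainment", "active_seconds": 12},
    {"section": "entertainment", "active_seconds": 9},
    {"section": "comics", "active_seconds": 240},
]
print(section_engagement(events))
# entertainment: many shallow visits; comics: fewer but far deeper
```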
6. Most Popular Articles and Sections
Simple but revealing: what did people actually read?
We tracked:
- Top articles by pageviews
- Top articles by time spent reading
- Top sections by visits
- Trending topics throughout the day
This often contradicted our assumptions. We thought technology news would dominate (it's AT&T, they're tech-savvy, right?). Reality: local weather, traffic, and celebrity gossip crushed everything else.
Why it mattered: Left to our own instincts, we were building for our assumptions, not our actual users. The data corrected us and guided our headline writing and editorial choices (we had a fantastic group of editors).
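Ranking the same articles by pageviews and by total reading time was trivial to do and regularly produced two very different lists; a sketch with hypothetical article stats:

```python
def top_articles(stats: dict, key: str, n: int = 5) -> list:
    """Rank article IDs by the chosen stat ('pageviews' or 'read_seconds')."""
    return sorted(stats, key=lambda a: stats[a][key], reverse=True)[:n]

stats = {
    "celebrity-gossip":  {"pageviews": 90_000,  "read_seconds": 15 * 90_000},
    "local-weather":     {"pageviews": 120_000, "read_seconds": 10 * 120_000},
    "long-read-finance": {"pageviews": 8_000,   "read_seconds": 420 * 8_000},
}
print(top_articles(stats, "pageviews"))     # traffic ranking
print(top_articles(stats, "read_seconds"))  # attention ranking -- often a different order
```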
7. Overall Visitor Count (But Carefully)
We did track total visitors, but we segmented heavily:
- Default traffic (opened browser) vs. intentional visits
- New vs. returning visitors
- Engaged (spent time/clicked) vs. bounced immediately
Raw visitor count was an important metric, but segmented visitor count showed trends that mattered.
Why it mattered: If engaged visitors increased even while total traffic decreased, that was actually good news: we were becoming more valuable to a smaller, more intentional audience. Overall viewership was in a long-term secular decline along with AT&T's DSL business, so any way we could buck the trend was valuable.
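A sketch of what that segmented roll-up looks like, assuming each visit has already been tagged (for instance by a heuristic like the one sketched under Repeat Visits): the trend worth watching is engaged, intentional visitors, not the raw total.

```python
from collections import Counter

def segment_counts(visits: list) -> dict:
    """Count visits by segment instead of reporting one raw total."""
    counts = Counter()
    for v in visits:
        counts["total"] += 1
        counts["intentional" if v["intentional"] else "default_load"] += 1
        counts["returning" if v["returning"] else "new"] += 1
        counts["engaged" if v["engaged"] else "bounced"] += 1
    return dict(counts)

week = [
    {"intentional": False, "returning": False, "engaged": False},
    {"intentional": True,  "returning": True,  "engaged": True},
    {"intentional": True,  "returning": True,  "engaged": True},
]
print(segment_counts(week))
# Total traffic can shrink while 'engaged' and 'intentional' grow -- that's the good-news case
```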
What We Learned About Each Metric
Retention Was King
If we had to pick one metric to rule them all, retention was it. Users who came back week over week were the validation that we provided genuine value, not just default-traffic momentum. Small improvements in 30-day retention had enormous impact on the business because it meant users were choosing us, not just tolerating us.
Page Depth Told Us About Content Quality
When page depth increased, we knew content was working. Users weren't just accidentally landing on us—they were exploring, clicking through, finding value.
When page depth decreased, something was wrong. Either the content quality dropped, the navigation was confusing, or users were getting what they needed too quickly (which sounds good but meant less ad inventory).
We A/B tested aggressively to optimize for page depth without being manipulative. Paginated slideshows, for example, increased page depth but made it a somewhat misleading KPI by overstating page counts: a slideshow was one article sliced into 12 pieces, not 12 articles.
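One way to keep the KPI honest is to collapse all slides of a slideshow into a single content unit before computing depth. A sketch, assuming each pageview carries a hypothetical content_id shared across the slides of one article:

```python
def honest_page_depth(pageviews: list) -> tuple:
    """Compare raw pages per session with depth counted in distinct content units."""
    sessions = {}
    for pv in pageviews:  # e.g. {"session": "s1", "content_id": "gallery-42"}
        sessions.setdefault(pv["session"], []).append(pv["content_id"])
    raw = sum(len(v) for v in sessions.values()) / len(sessions)
    deduped = sum(len(set(v)) for v in sessions.values()) / len(sessions)
    return raw, deduped

# One session reads a 12-slide gallery plus one article: raw depth 13, honest depth 2
pvs = [{"session": "s1", "content_id": "gallery-42"}] * 12 + [
    {"session": "s1", "content_id": "article-7"}
]
print(honest_page_depth(pvs))  # (13.0, 2.0)
```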
Time on Site Was Noisy But Useful
Time on site was hard to interpret cleanly. Really high numbers sometimes meant confused users stuck on a page. Really low numbers sometimes meant we answered their question efficiently.
But in aggregate, directionally, time on site correlated with satisfaction. When we surveyed users, those with higher average time on site reported higher satisfaction with the portal.
So we tracked it, but we never optimized for it in isolation.
Popular Content Revealed Our Actual Audience
We tried our best not to build for our aspirational audience and instead to serve our actual audience: more conservative viewpoints, more pictures.
Section-Level Metrics Drove Budget Decisions
Knowing which sections drove engagement helped us allocate resources:
- Entertainment got high traffic but shallow engagement → cheap syndicated content worked fine
- Local news got moderate traffic but deep engagement → worth investing in quality partnerships
- Sports had passionate engagement from 15% of users → maintain but don't expand
- Technology news had low engagement despite our assumptions → deprioritize
Without section-level metrics, we would have misallocated millions in content partnerships.
The Metrics We Couldn't Measure (But Wished We Could)
Some questions remained frustratingly hard to answer:
"Would users miss us if we disappeared?" We had engagement data but not sentiment data. Did people actively value us, or just tolerate us?
"Are we building habits or just catching defaults?" Even repeat visits could be explained by browser behavior rather than genuine preference.
"What's our actual competitive set?" We competed with "type URL and go somewhere else immediately." Hard to benchmark against that.
"How much of our traffic is basically 'loading screen before real internet'?" Some percentage of our traffic was just the screen that appeared before users went where they actually wanted to go. We tried to measure this but never got clean data.
The Strategic Implications
These metrics shaped our product strategy:
Focus on wider content appeal. We optimized for time spent reading as well as pageviews generated.
Invest in personalization. If we could show users content they actually cared about, repeat visits increased.
Make navigation frictionless. Any confusion killed page depth and time on site. Unfortunately, the weight of all the ads slowed the portal down.
Double down on what works. When we found content that drove retention, we invested heavily in more of it.
Measure the right cohorts. "All users" was too broad. "Users who engaged for 30+ seconds on first visit" told a different, more useful story.
Conclusion: Success Looks Different at Scale
When millions of users you didn't acquire show up on your doorstep, success isn't about growth; it's about value.
The metrics that mattered were the ones that answered: Are we worth their time?
- Retention: Do they come back?
- Page depth: Do they explore?
- Time on site: Do they engage?
- Section popularity: What do they value?
- Repeat visits: Do they choose us?
Have you worked on products with captive or default audiences? What metrics mattered for you? How do you measure success when users don't choose you?