Insights
Inside the Judging of the Top 10 Fund Factsheets

We distinctly remember a factsheet we reviewed from a prospective client. They wanted to automate their typical copy/paste Excel/Word production process but insisted their internal design team had already done a ‘good enough’ job on the templates. Naturally, we went to their website to check.

Underwhelmed would be the polite description. Their factsheets fell short on design, content, and practical use for prospective investors. Nowhere near ‘good enough.’ One colleague remarked that these were among the worst they had seen, while another countered that they had encountered even poorer examples. This sparked a lively discussion about the most disappointing factsheets we had come across, which soon evolved into sharing our experiences of the very best ones. We each had strong candidates in mind, though initial agreement proved elusive.

But from this disappointing discovery and the ensuing debate came inspiration. Like a pearl forming around an irritating grain of sand, The Top Fund Factsheets review was born.

What surprised us most

While reviewing the longlist of factsheets, we were struck by how many still contained basic formatting and pagination issues, despite having reached the point of publication.

In most cases, the root cause was obvious. The templates in use simply weren’t flexible or intelligent enough to handle variations in the content they were expected to display.

In 2025, any live factsheet production solution should be built on templates designed to make these issues virtually impossible, except in the most extreme edge cases. Just as importantly, it should be supported by processes and workflows that ensure any problems that do arise are identified and resolved before the factsheet is distributed.

On their own, formatting issues may seem minor when compared with the importance of content completeness and accuracy. In practice, though, they quietly erode confidence, not just in the document, but in the firm and the investment product behind it. Avoiding these issues through smarter template design and robust production processes is therefore not a cosmetic concern, but a fundamental part of delivering a credible and reliable reporting process.

What took the longest to assess

In an exercise like this, success always hinges on the judging criteria and their weightings. Finding the right balance between meaningful and practical was a challenge, and we went through multiple iterations, sacrificing several aspirational “darlings”, to arrive at an initial set of screening criteria that could produce a workable shortlist. The final subjective criteria would then be applied to identify the top 10.

To paraphrase Moltke, “no plan survives first contact with the enemy.” In practice, our initial screening had to be run several times to refine the criteria. Early on, many factsheets ended up with identical scores, offering little differentiation. Only after multiple iterations did we achieve a broader spread of scores, yielding a practical shortlist ready for the final subjective review.

Did we arrive at the perfect judging criteria? No, and we don’t expect to. This is an exercise we plan to repeat regularly, improving and refining the process each time. We also hope to expand the analysis to other markets and reporting outputs, each of which will bring its own unique challenges. Even so, the experience we have gained, and continue to build, positions us to deliver ever more meaningful and insightful analysis in the future.

The judging

After numerous hours, we had compiled a spreadsheet of over 200 factsheets with their various scores and subtotals. These scores determined which factsheets advanced to the final stage: the top 50.

The judges then faced the challenge of narrowing the top 50 down to just 10. Early on, we had debated using a purely mathematical scoring system. Instead, we opted for qualitative scoring layered on top of the initial quantitative metrics. This is where the human factor proved essential.

With over 70 years of combined industry experience in the room, we didn’t initially agree. Some selections and rejections sparked active discussions. What threatened to become a very long evening in Factbook’s boardroom gradually grew more productive. As we worked through the factsheets, we coalesced around 15 that stood out as the best of the bunch. More granular analysis of the data, design, and content through an investor’s lens brought us to the final 10.

What we changed our minds about

1. “Let’s do 2,000 factsheets, how hard can this be?”

Casually scanning a factsheet takes a few minutes. Actively dissecting one can take upwards of 15 minutes, even with a trained eye. Throughout the day, we found ourselves repeating observations like: “That font is different”; “Why is there white space on that page?”; “Where is their phone number?”; “Some figures in the performance table have different decimal places!”

With approximately 200 factsheets, that equated to over 50 hours of work per person (not including finding, downloading, collating, and opening the documents). Our initial idea of scoring 2,000 factsheets suddenly didn’t seem so clever.

2. The weighting and selection criteria

We started with four or five broad categories. As we worked through the first batch, it became clear we needed more granular scoring. Through several iterations, we split the main categories into two or three parts and introduced a final subjective review element.

3. Beauty is not just skin deep

When you review hundreds of factsheets in a day, well-designed ones immediately stand out. Surprisingly, the opposite was also true. Some factsheets with frankly terrible designs contained incredible detail, were well written, and were clearly planned with investors in mind.

During selection, these visually unappealing factsheets scored highly on non-design criteria, and once we looked deeper, we adjusted their scores to reflect content quality. The data wasn’t just present; it was comprehensively explained in plain English, especially for risks, holdings, and costs.

4. Human judgment matters

One final mechanism we changed was the scoring weighting. We wanted a definitive human-judgment element in the overall score, which also helped break ties where scores were otherwise identical.

What we’d like to see change

Alongside world peace, and probably just as likely, it would be wonderful to see firms explain their risks properly. If you’ve made changes to holdings, give reasons. Add percentage change figures. Show there are humans behind the factsheet thinking about investors. Why should an investor have to dig out last month’s factsheet and compare two bar graphs themselves?

To all those firms with one-page factsheets who think that’s sufficient information (including glossary, disclaimer, and compulsory charts): do you think this puts your firm in the best light? A factsheet is probably your most frequent touchpoint with investors. Use that frequency to inform your investor base.

There’s a growing perception of funds as standardised products. All other things being equal, which firm would you invest with as a prospective investor? One with beautifully designed, content-rich, well-explained, strongly branded factsheets? Or one with a couple of A4 pages that look thrown together as an afterthought?

Concluding thought

The sheer manual intensity of this review was a revelation. As automation specialists, we’re already leveraging this experience to develop smarter workflows and automated tools that will handle the heavy lifting in future iterations. After all, solving these exact efficiency challenges is at the core of what we do at Factbook.
