Using data to make complex decisions, fast

Simon Epstein and Emma Watkins describe how a society publisher unlocked author insights

Making good decisions often rests on a combination of creative leadership and actionable data-driven insights. But getting to those insights can be harder than it sounds thanks to a noisy environment of disconnected systems, siloed data centres and legacy technology. Nowhere is this a bigger problem than in the research ecosystem, whether that be in research institutions, funding bodies or academic publishers.

Most companies these days are creating and housing more data than ever before. According to IDC's Global DataSphere Forecast for 2021-2025, the worldwide generation and duplication of data are expected to exhibit a compound annual growth rate (CAGR) of 23%, reaching 181 zettabytes by 2025. This represents a significant increase from the 64.2 zettabytes of data generated in 2020, which itself was 10 times greater than the 6.5 zettabytes produced in 2012.

All this data represents a great opportunity for businesses that can figure out how to harness it to make better decisions. But how should you start? And how can you make this a rapid, repeatable process? One way is to start thinking experimentally.

A case study from AIP Publishing

When AIP Publishing (AIPP) approached 67 Bricks for technology consultancy in 2022, they knew they had a wealth of data at their fingertips that could be mined for innovation. Importantly, they also had a shopping list of questions about both their users and their internal processes that they were looking to answer. So we helped them design a series of experiments to do just that and uncover the insights hiding in all that data.

Experiment one - author relationships

Like many publishers, AIPP had a lot of data on their published authors, but it was rarely interrogated beyond the processing of each individual article. They wanted to explore the connections between their customers using graph databases, to see whether these could inform future strategy. 67 Bricks uploaded their data into a sandboxed graph database and from there discovered relationships that had not previously been visible.
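
As a rough illustration of the mechanics (a sketch, not AIPP's actual pipeline or schema), a co-authorship graph like the one behind this experiment can be assembled from article metadata in a few lines of Python using the networkx library; the records and names below are invented for the example, and a real experiment would more likely load the data into a dedicated graph database.

```python
# A minimal sketch, not AIPP's actual pipeline: build a weighted
# co-authorship graph from article metadata with networkx. The record
# structure and author names below are invented for illustration.
from itertools import combinations

import networkx as nx

articles = [
    {"id": "A1", "authors": ["Chen", "Okafor", "Silva"]},
    {"id": "A2", "authors": ["Chen", "Silva"]},
    {"id": "A3", "authors": ["Okafor", "Marchetti"]},
]

G = nx.Graph()
for article in articles:
    # Every pair of co-authors on a paper gets an edge; repeat
    # collaborations increase the edge weight.
    for a, b in combinations(article["authors"], 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1
        else:
            G.add_edge(a, b, weight=1)

# Frequent collaborators are simply the heaviest edges.
for a, b, w in sorted(G.edges(data="weight"), key=lambda e: -e[2]):
    print(f"{a} <-> {b}: {w} joint papers")
```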

This unlocked insights such as:

  • Authors who frequently collaborated with each other
  • ‘Power collaborators’, who could be identified automatically by PageRank (see the sketch after this list)
  • Authors who haven’t collaborated but are interested in the same field, and who could perhaps review or collaborate on a review paper or book
  • Clusters of popular topics which could inform special issues or new journal launches
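
To make the second and third of these concrete, here is a hedged sketch of both queries, again using networkx; the graph, the edge weights and the ‘field’ attribute on each author node are toy assumptions standing in for real publication data.

```python
# A hedged sketch of two of the insights above. The graph, weights and
# 'field' attribute are toy stand-ins for real publication data.
from itertools import combinations

import networkx as nx

G = nx.Graph()
G.add_edge("Chen", "Silva", weight=3)
G.add_edge("Chen", "Okafor", weight=1)
G.add_edge("Okafor", "Marchetti", weight=2)
nx.set_node_attributes(
    G,
    {"Chen": "optics", "Silva": "optics",
     "Okafor": "acoustics", "Marchetti": "optics"},
    name="field",
)

# 'Power collaborators': the authors with the highest PageRank scores
# in the weighted co-authorship graph.
scores = nx.pagerank(G, weight="weight")
print(sorted(scores, key=scores.get, reverse=True))

# Same-field authors who have never collaborated: candidate reviewers
# or co-authors for a review paper.
for a, b in combinations(G.nodes, 2):
    if not G.has_edge(a, b) and G.nodes[a]["field"] == G.nodes[b]["field"]:
        print(f"{a} and {b} share a field but have not collaborated")
```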

The possibilities these sorts of insights could open up for editorial, sales or marketing teams were very exciting, and they offered a new way of thinking about AIPP's customers.

Experiment two - operational data

AIPP uses a number of ticketing systems to track operational tasks, including JIRA and Freshdesk. With data held in various systems, they had a hunch they were missing patterns and opportunities for improving efficiencies.

Firstly, 67 Bricks enriched AIPP's JIRA data by deriving new fields from existing information, such as time to resolution and time in QA. From here we could make inferences about which processes could be improved, and despite gaps in the available metadata, such as the type of request, we were able to identify clusters of similar tickets. This revealed which processes the team were spending the most time on, and which should therefore be tackled first. AIPP also discovered that their use of these ticketing systems was not as robust as they thought: they identified system improvements that would improve data capture, putting them in a better position to mine for insights in the future.
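
The clustering step can be illustrated with one common, generic technique (a sketch, not necessarily the exact method used): vectorise ticket summaries with TF-IDF and group them with k-means, so that similar requests fall into the same bucket even without request-type metadata.

```python
# An illustrative sketch (one common technique, not necessarily the exact
# method used): cluster ticket summaries with TF-IDF and k-means so that
# similar requests group together without any request-type metadata.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

summaries = [
    "Fix broken DOI link on article page",
    "DOI link returns 404 for new issue",
    "Update journal homepage banner",
    "Banner image wrong on journal homepage",
]

X = TfidfVectorizer(stop_words="english").fit_transform(summaries)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

for summary, label in zip(summaries, labels):
    print(f"cluster {label}: {summary}")
```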

What did AIPP learn?

Waylon Butler, Director, Data and Analytics at AIPP, reflected that the most useful learning from this experiment work with 67 Bricks was the meta-lesson about how much can be learned in a short period of time: ‘We unlocked hidden insights in data we already had in hand. We discovered improvements in systems and workflows that seemed pretty robust. We learned about the utility of new data mining and analysis techniques. We learned about the power and ease of a variety of technical tools. And we learned about a different way to approach work and problem-solving generally.’ For anyone considering something similar, he counsels that one does not need a big project that is extensively planned: ‘just get the data, some expertise, and some curiosity together, ask questions, then repeat on a short cadence.’

Maintaining an experimental mindset

These experiments took very little time to set up and make fruitful: a matter of weeks. But to get the most out of this sort of work, you also need a longer-term mindset. While it can be exciting to run short-term projects, attention should be paid to how, where appropriate, any lessons learned will be rolled out to the rest of the business. The advice is simple: ‘little and often’ is generally more effective than big projects run occasionally. To do this you'll need to empower your teams to play and think beyond business as usual, or engage a technology consultancy like 67 Bricks to help model how you can tackle these experiments alongside your usual work, as part of a wider strategy.

Top tips for getting started

  • Get used to working with your data - this ensures it is accessible and that you know how to transform it into the structure an experiment requires when the opportunity arises.
  • If you are evaluating a new tool or technology, move from sample data to your own data as soon as you can - it is easier to ask meaningful questions of data from a domain you already understand.
  • There are low-code/no-code tools, such as graphext.com, which allow non-technical users to explore correlations in their data and apply natural language processing (NLP) techniques such as sentiment analysis and classification. You can get started with a CSV file! Even our experts occasionally use these tools to see if an approach has merit, then build their own pipeline when they need more flexibility (see the sketch after this list).
  • Work across your company to conduct a rough data audit - identify sources of data that may not exist in a shared data repository, and also identify valuable questions that experimentation could answer.
  • Always have a goal; whilst exploring your data in general can uncover insights, these are more obvious when you are working towards a specific goal. You may uncover the supporting evidence you need for a new business case, or find something that challenges your assumptions about your users. Working this way also pushes your tooling to deliver the results you need, rather than whatever it happens to produce.
  • Create opportunities for technical staff and subject-matter experts (SMEs) to mix on an equal footing, outside of their usual roles. This could be through hackathons, internal entrepreneurship challenges or simply forums where technology and industry opportunities are discussed openly.
  • Commit and prioritise time for this experimentation work; don’t just squeeze it in around ‘business as usual’. While the commitment can be light and iterative, you still need to commit to reap the rewards and learnings.
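
As an example of the ‘build your own pipeline’ step mentioned above, here is a minimal, hedged sketch of running sentiment analysis over a CSV of free-text comments in Python; the file name, the ‘comment’ column and the choice of the Hugging Face transformers default sentiment model are all illustrative assumptions.

```python
# A minimal sketch of graduating from a no-code tool to your own pipeline.
# The file name, the 'comment' column and the default Hugging Face
# sentiment model are all assumptions made for illustration.
import pandas as pd
from transformers import pipeline

df = pd.read_csv("feedback.csv")  # hypothetical export, one comment per row

# The default sentiment-analysis model; swap in a domain-specific model
# as your needs grow.
classifier = pipeline("sentiment-analysis")

df["sentiment"] = [r["label"] for r in classifier(df["comment"].tolist())]
print(df[["comment", "sentiment"]].head())
```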

Simon Epstein is Head of Technical Consultancy at 67 Bricks; Emma Watkins is Head of Marketing and Communications at 67 Bricks.