How to Continuously Improve your School’s Website with Evolutionary Redesign
Date posted: November 23, 2016
Higher education websites are both vitally important to a school’s success and very easy to get wrong. With so many academic departments and constituent colleges to represent, diverse audiences to serve and objectives to accomplish, building a website that satisfies all types of users on all types of devices is a tall challenge indeed. On top of these essential requirements, universities are eager to establish a distinctive brand that conveys that they’re progressive, inclusive and cool (among many other attributes).
When institution leaders eventually decide that their existing sites aren’t doing the job, the sheer size and complexity of higher ed websites make redesigns a huge undertaking. They have significant up-front costs and implementation times, frequently going over budget and taking longer than expected. The worst part is that when the expensive new site is finally unveiled, colleges risk finding that their traffic has dropped precipitously or that they’re facing a backlash from irate users.
Although not strictly website-related, attempted logo rebrands at the University of California and Trinity College were recently sent back to the drawing board after passionate rejections from their respective communities. True website success is measured by more subtle means: analytics data will reveal detailed page views, conversions and bounce rates, and it can take weeks or months to objectively assess whether a website is performing as desired.
There are many reasons why traffic may drop following a redesign: SEO was not effectively built into the redesign process, the new site didn’t undergo sufficient internal testing, users are confused by the new navigation, or the overall design simply doesn’t satisfy user needs.
Revolutionary vs. Evolutionary Redesigns
Traditional website redesigns feature a revolutionary “out with the old, in with the new” approach, often focused on implementing trendy aesthetics based on subjective assumptions. University leaders and creative agencies tend to underestimate the risk of revolutionary redesigns, gambling that adopting a beautiful look and incorporating established “best practices” will automatically achieve expected gains.
The big problem is that these sweeping changes typically ignore or minimize user feedback and analytics, risk alienating your community, and are difficult to reverse once the new design is live. Even if new pages are increasing conversions, it’s difficult to identify which of the simultaneous changes are responsible for these improvements.
That’s why more and more organizations are favouring evolutionary redesigns, minimizing these kinds of risks by making structured, incremental improvements that are continuously tested and optimized. Time and resource commitment is then spread out over time because the website is always evolving.
While radical redesigns can be necessary to replace long-outdated websites, a better approach is to continuously improve the user experience, applying A/B testing to evaluate each successive design update. For instance, Princeton University has long been working on a major redesign that they intend to launch this fall after having the same basic design for eight years:
“Since 2008, our site has had the same basic design,” writes Dan Day about the project. “We won’t wait another eight years to redesign. Once we put up the new version, we’ll start work on improving it and do so continuously week after week, month after month, year after year.”
Continuous improvement is part of their new vision, in recognition that the site’s users are constantly changing and even the expectations of return visitors continue to evolve with the times. Universities are far from static, so why should their websites be? Princeton lists dozens of meetings their design team has had with campus groups, noting that these interactions won’t stop after the site launches because “continuous improvement means continuous collaboration.”
A Better Approach to Website Redesign: Always Evolving
It doesn’t make sense to focus on your most important communications and recruitment platform only once every five years. Instead of launching and leaving, more organizations are opting to keep developing, testing and refining their websites. Higher education sites seem particularly well suited to this approach, limiting frustrations from diverse stakeholders by rolling out a series of small improvements while retaining overall stability, letting users become accustomed to new aspects one at a time.
It’s a strategy utilized by the biggest internet sites – Google, Amazon, eBay, etc. – which quietly forgo big redesigns in favour of continual changes that can be tested and implemented so fluidly that users rarely notice. eBay once tried switching its formerly bright yellow background to white in a sudden change, but the influx of complaints forced it to change the background back. Instead of conceding defeat, eBay gradually modified the background colour, shade by shade over several months, until the yellow was finally gone. Hardly anyone noticed this time.
When planning any redesign, it’s best to focus on the goals you wish to achieve with your website rather than everything perceived to be wrong with the existing design. Evaluate the content of your most visited pages (using Google Analytics) before considering a deeper content audit of a few dozen others deemed important for your marketing efforts. At this point, the goal is just to determine whether your content reflects what’s best about your school.
Is your content doing what it’s supposed to – effectively informing and inspiring targeted traffic, integrating optimal keywords, and driving conversions? Is it true to your school? Is it appealingly designed for readability with a good font size, paragraphs, sub-headings, images and videos? Strive to thoroughly understand what users expect from your site and their navigation path to fulfilling these goals. When considering changes to your higher education web development, keep in mind that users tend to care less about aesthetic design elements than being able to easily complete their tasks.
Here are three key SEO considerations for changing content:
- Integrate 301 redirects to pass search engine rankings and users from the old page to a similar new page
- Be cautious about changing internal links in your site’s architecture
- Understand which keywords and topics have the best ROI and traffic potential, so you maintain priority keywords in title tag, header tags, body copy and alt text, and don’t remove pages targeting those keywords
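The first point above can be sketched in code. Below is a minimal, hypothetical example of the 301 redirect mapping idea – the URLs are invented for illustration, and in practice you would configure redirects in your web server or CMS rather than in application code:

```python
# Hypothetical old-to-new URL map for a redesigned site; every retired
# page should point to its closest new equivalent.
REDIRECT_MAP = {
    "/admissions/apply.html": "/admissions/how-to-apply/",
    "/depts/bio/index.html": "/academics/biology/",
}

def resolve(path):
    """Return (status, location) for a requested path.

    Old URLs get a permanent 301 redirect so search engines transfer
    ranking signals to the new page; unmapped retired paths would
    otherwise fall through as 404s and lose their accumulated SEO value.
    """
    if path in REDIRECT_MAP:
        return 301, REDIRECT_MAP[path]
    return 404, None
```

The key discipline is completeness: audit every old URL with traffic or backlinks and make sure it appears in the map before launch.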
The A/B’s of Testing Everything
Regular A/B testing is at the heart of evolutionary design, measuring two or more options in a controlled environment according to the behaviour of real users to reveal the will of the people. For example, if changing the colour of a Call-to-Action button yields 20% more clicks, then the data has spoken. Absolutely everything in your digital marketing can be tested, but to realize the greatest redesign gains, focus first on testing elements that will most impact user experience and conversions. These include new initiatives or pages that have yet to be optimized, high-traffic or strategically important pages, and poorly performing pages with low conversions and high bounce rates.
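For clarity, “20% more clicks” refers to relative lift, not a 20-percentage-point jump. A quick sketch of the calculation (the 5% and 6% rates are invented for illustration):

```python
def lift(control_rate, variant_rate):
    """Relative improvement of the variant over the control."""
    return (variant_rate - control_rate) / control_rate

# e.g. CTA click-through rises from 5% to 6% of visitors
print(f"{lift(0.05, 0.06):.0%}")  # prints "20%"
```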
Some organizations start with qualitative usability testing, surveying users for feedback or observing a test group’s interaction with their websites. This strategy calls for asking users to perform specific tasks, encouraging them to talk aloud as they navigate your site, and asking general preference questions. Qualitative analysis is a great way to generate ideas for testing. It may be done with various redesign possibilities to help reveal the optimal version as well as identify potential design flaws that were previously overlooked.
However, experimenting with design variations through A/B or multivariate testing is a more representative data-driven approach to confirm hypotheses and reduce redesign risks. This optimization strategy involves splitting your web traffic so each visitor sees either your current version (the control group) or one or more variations. Specific performance metrics are tracked to determine which version delivers higher conversion rates.
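In practice, A/B testing tools split traffic deterministically so that a returning visitor always sees the same version. A minimal sketch of that idea, assuming you have some stable visitor identifier (a cookie value, for instance) – the experiment name and split ratio here are hypothetical:

```python
import hashlib

def assign_variant(visitor_id, experiment="cta-button", split=0.5):
    """Deterministically assign a visitor to 'control' or 'variant'.

    Hashing the visitor ID together with the experiment name keeps each
    user in the same bucket across visits without storing any state,
    and gives different experiments independent splits.
    """
    digest = hashlib.md5(f"{experiment}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform in [0, 1]
    return "control" if bucket < split else "variant"
```

Commercial tools handle this assignment (plus metric tracking) for you, but the underlying mechanism is much like the above.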
Multivariate testing compares combinations of varied attributes to measure how two or more dimensions of a design interact with each other to determine a design’s success. This type of testing generally takes longer to reach statistically significant results and is more complex to interpret.
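The reason multivariate tests take longer is combinatorial: every added attribute multiplies the number of variants your traffic must be divided among. A quick illustration with invented attributes:

```python
from itertools import product

# Hypothetical attributes under test; a multivariate test runs every
# combination, so the variant count grows multiplicatively.
headlines = ["Apply Today", "Start Your Journey"]
button_colours = ["green", "orange"]
hero_media = ["photo", "video"]

variants = list(product(headlines, button_colours, hero_media))
print(len(variants))  # 2 x 2 x 2 = 8 variants to split traffic across
```

With eight variants instead of two, each one receives a quarter of the traffic an A/B test would give it, so reaching significance takes roughly four times as long.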
A/B design variations may differ in only one element (e.g. changing an image for a video or changing the colour of a button) or in many elements at once (e.g. changing an entire page layout). Switching seemingly obscure elements can sometimes create surprisingly significant differences that generate more student inquiries, providing data-driven evidence to justify seemingly subjective design decisions.
Example: When Khan Academy introduced a “sneak peek” preview feature into some of their online lessons, user response was mixed. They decided to measure user preferences by running a 50/50 test on their site for a week, with half of users seeing the preview features and half not. Their results showed that the “no sneak peek” version converted at a rate nearly 30% higher than the one including the new feature.
Creating a Digital Marketing Experimentation Culture
A robust higher education analytics system seeks to measure both micro conversions – indicators that prospects are completing their intermediate tasks (secondary website goals) – and macro conversions, which provide a truer picture of how well your primary goals are being accomplished. Once key conversion metrics have been determined, seek to identify specific pages, features or functions that are underperforming and choose a conversion-related element to redesign.
There are many online tools available to conduct A/B testing, including Google Content Experiments (a free service that allows testing up to 10 versions of a single page, though you must design and code the variations yourself), Optimizely and Visual Website Optimizer. Never stop testing small but frequent changes, establishing a calendar for rolling out new updates and an efficient process between designer and analyst to continuously measure results. Keep in mind that these results can be puzzling, so look at many different metrics to deduce what actually changed. Also consider results across user segments – first-time visitors, particular geographic regions or certain student personas – for design preference insights into prioritized demographics.
Some tests will deliver inconclusive results, and some changes will work better in the long term than the short term because of general change aversion, seasonality or other factors. By maintaining an ongoing record of previously run tests, you can return to promising ideas and re-run tests, perhaps with small adjustments, and sometimes get different results. General preferences will inevitably evolve over time, but with smaller incremental updates it’s easy to reverse inaccurate interpretations of testing results.
Final Testing Considerations
When implementing design changes, don’t disregard other performance data like page load times that can strongly influence website effectiveness. Every revised page and design element should be thoroughly tested on as many devices and browsers as possible.
It’s sometimes preferable to target a test to one segment – filtering out international traffic, mobile users or desktop users, for example. An A/B test’s “winner” should only be declared when there is high confidence in the results (usually a 90–95% chance of being accurate), with sufficient data and test duration to account for natural variation over time.
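The 90–95% confidence threshold above can be computed with a standard two-proportion z-test. A self-contained sketch using only the standard library (the visitor and conversion counts in the usage note are invented for illustration):

```python
from math import erf, sqrt

def ab_confidence(conv_a, n_a, conv_b, n_b):
    """Two-sided confidence that two conversion rates truly differ.

    Standard two-proportion z-test: pool the rates under the null
    hypothesis, compute the z-score of the observed difference, and
    convert it to a confidence level via the normal CDF. Declare a
    winner only above your threshold (commonly 0.90-0.95).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 0.0
    z = abs(p_a - p_b) / se
    return erf(z / sqrt(2))  # two-sided confidence from the normal CDF
```

For example, 120 conversions from 1,000 control visitors versus 90 from 1,000 variant visitors clears the 95% bar, while identical rates return zero confidence. Note this simple version doesn’t correct for peeking at results mid-test, which inflates false positives.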
While iterative changing and testing plays an important role in continuous improvement, know when and when not to use A/B tests. Strategic shifts involving new initiatives or organizational direction can’t be adequately tested in this manner; however, you could employ a “smoke test,” releasing the change to a small user segment for a short time to predict outcomes.
Although big changes are more likely to yield larger conversion gains, it’s recommended for teams less experienced with testing to start with smaller elements that can be listed and prioritized. As you begin recognizing trends in your testing data, you’ll develop greater confidence in your understanding of the needs and expectations of your users. This shouldn’t prevent continued objective experimentation but it will give you a better idea of what to test and which variations are more likely to improve return on investment.
Has your school experimented with A/B testing on its website? What have been your biggest conversion gains or surprises?