Smoke testing for user motivation

Before anyone started building, I quickly tested whether current users would be motivated to try a new product. The results gave our team more confidence that what we wanted to build would be worth it.

Objective

Test motivation for current users to try a new product.

Approach

Smoke Testing for minimum viable products (MVPs).

Results

I found evidence of at least some interest in a custom analytics feature. Combined with an analysis pointing to low costs to develop and maintain the new product, this research led us to proceed with development.

my role
UX Researcher, Product Owner
project type
Solutions testing
project year
2021
client
Dashmote
research questions
Are current Dashmote users interested in a new custom analytics product?

Challenge

At Dashmote, we supply market data in a dashboard format to enterprise customers. Some of these customers, we found, were not making full use of the dashboards: they would instead open a dashboard, download the dataset, and then manipulate the data in MS Excel. My team devised a solution for self-service analytics within our app, enabling users to gather and manipulate datasets in one place and then download their final result. Before developing it, we wanted to make sure the solution would actually be an attractive option for our users.

Are current Dashmote users interested in a new custom analytics product?

Team

I organized and conducted the research for this project myself, wearing both my UX Research and Product Owner hats.

Process

Format

Testing whether customers are going to buy something in the future is tricky. I could not simply ask potential customers directly, because they would usually give hopeful responses even if they were not interested, especially if they liked us as a company or me as a researcher.

There are, however, ways to simulate a product offer to see how the potential customer responds. In this case, I borrowed a method from both digital marketing and product ownership to create a Smoke Test. The test was simple: I sent out a message to all Dashmote clients to advertise a new product in development, and then tracked how these clients responded to that message. I measured general growth metrics such as the number of people who

- opened the email message,

- clicked the link in the email to learn more, and

- left a reaction on the article with more information.

Although the observational depth for a test like this is low (only a few measures can be analyzed), its power to validate a hypothesis is high: the test can easily be administered to a large number of users, and it gauges behaviors that point to actual use of the product (see Alexander Cowan’s approach to venture design, which is based on the Lean Startup method). On top of that, the test helps get around the bias that comes with asking potential users directly whether they are likely to buy a product in the future. Users respond to the test in their own contexts, without thinking much about the research team behind it, so we get a glimpse of how they might respond to the real thing later on. In other words, if people are interested enough to click to learn more about a product, they may actually buy it when it is ready.
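To make the funnel concrete, here is a minimal sketch of how these measures roll up into conversion rates. The counts below are hypothetical placeholders, not the real campaign numbers, which in practice came from Intercom’s campaign reporting:

```python
# Minimal sketch of the smoke-test funnel. All counts are hypothetical.
funnel = {
    "delivered": 200,  # emails that did not bounce
    "opened": 40,      # recipients who opened the message
    "clicked": 6,      # recipients who clicked "Learn more"
    "reacted": 1,      # readers who left a reaction on the article
}

def rate(step: str, base: str) -> float:
    """Conversion of one funnel step relative to an earlier step."""
    return funnel[step] / funnel[base]

print(f"Open rate:          {rate('opened', 'delivered'):.1%}")
print(f"Click-through rate: {rate('clicked', 'delivered'):.1%}")
print(f"Reaction rate:      {rate('reacted', 'clicked'):.1%}")
```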

Hypothesis

The next task was to determine how much interest would be considered enough for us to invest in developing the solution. The hypothesis was:

If we send all users a teaser message (email + article) inviting them to learn more about a new analytics tool, we will see clear signs of motivation. 

Furthermore, we would recognize these signs of motivation by the following: 

  1. # users who open the email and then click to learn more > 1 (at least 2.4% of active users) → meeting this threshold was a must-have before we would develop
  2. # users who sign up to participate in a beta test > 1 → this was a nice-to-have, but would indicate strong interest in our solution
Sketching out the hypothesis for this experiment

These thresholds may seem low (two people click in the email and we build it!), but compared to general industry marketing benchmarks, they were about the best we could expect. For example, Campaign Monitor reported on 100 billion emails sent globally between January and December 2020 and found that the average click-through rate (CTR) once the emails were opened was 2.6%. For IT/Tech/Software Services in particular, the rate was around 2.8%. Mailchimp found in 2019 that the average CTR across industries was 2.62%, and for Software and Web Applications it was 2.45%.

See Ultimate Email Marketing Benchmarks for 2021: By Industry and Day by Campaign Monitor

See Email Marketing Benchmarks and Statistics by Industry by Mailchimp from 2019

The sources listed above, however, do not distinguish the particulars of the segments tested. For example, it is not entirely clear whether these sources separate out B2B statistics, or whether the numbers differ between customers targeted for acquisition and current customers. With this in mind, I expected some variance from these percentages when attempting to replicate the results. A related benchmark is the CTR for Google Search ads, which reach customers of all types who already have a minor vested interest in what they are searching for (similar to Dashmote’s current clients, who have a vested interest in what we may be developing because they may benefit from it). Google Search CTR for B2B was 2.41% as of October 2020, and for Technology it was 2.09%. These related metrics do differ, including when segmented by industry, but they all fall within one percentage point of one another.

See Wordstream, last updated April 23, 2021

I therefore used these benchmarks as a starting point to gauge interest in our potential new product. In practice, this meant that two of the current monthly active users would need to click the call-to-action in the email for the hypothesis to be supported.
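As a small worked version of that decision rule, here is a sketch. Only the 2.4% benchmark comes from the sources above; the active-user count is a hypothetical placeholder:

```python
import math

# Decision rule for hypothesis condition #1 (sketch). BENCHMARK_CTR is
# the industry benchmark cited above; the active-user count is a
# hypothetical placeholder.
BENCHMARK_CTR = 0.024
monthly_active_users = 80  # hypothetical

# Clicks needed to match the benchmark, never fewer than 2
# (the "more than one user" condition in the hypothesis).
min_clicks = max(2, math.ceil(BENCHMARK_CTR * monthly_active_users))
print(f"Clicks needed to support the hypothesis: {min_clicks}")
# With roughly 80 active users, 2.4% works out to about two clicks,
# consistent with the two-user threshold described above.
```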

I decided not to treat email opens as a sign of motivation, since the subject line only alerted the recipient generally to a new feature; it gave away no particular information about the feature itself.  

Regarding the second hypothesis condition, on the number of users who sign up to participate in beta testing, I decided to be conservative and to accept support only if more than one user signed up.

User Journey and Test Material

The client's user journey during this test started with an email campaign announcing that we were in the beginning stages of designing a new feature and wanted to give a sneak peek. The message teased just enough information for the user to understand the general purpose of the new feature, without giving away so much that there was no reason to click the call-to-action to learn more. Clicking to learn more would be a key measure of interest for this test.

Alternatively, clients could see a new card appear once inside the Dashmote application. This card brought the client to the same article as in the email.

If the client arrived at the article via a route outside this plan, say by a Customer Service Manager asking them to check their email for our message, this would not affect the outcomes of this test, since it would still be up to the client to click to learn more within the email.

Clients then read the article for more information about the new feature and viewed screenshots of the prototype. If they felt highly interested in the feature, they clicked to sign up for beta testing at the bottom of the article. This button took them to a beta testing sign up form.

I used Intercom for all the messaging and Google Forms for the beta testing sign-up form.

User journey during this Smoke Test

Screenshot of the email message in Intercom

Screenshot of the article in the Help Center

Screenshot of the beta testing sign-up form

Logistics

I ran the test for one week and managed it 100% remotely. The test was sent to all current users who had access to a dashboard.


Metrics and Questionnaires

The following metrics were analyzed for this test:

  1. # article views
  2. # article reactions and % for each sentiment
  3. # participants recruited to participate in beta testing


Limitations accounted for with this method

There were two main limitations expected when starting this test. The first was that our Intercom subscription only allowed email messages to be sent from an address generated by Intercom rather than from the sender’s real address. As a result, we expected some emails to never reach users at all, going directly to spam instead. 


Intercom-generated email address

I launched the test regardless and used the results as a baseline against which to measure outcomes of later Smoke Tests. In other words, no one at Dashmote had tried this method for testing motivation before, so my team was willing to take risks here. A complete failure to deliver emails would not have been a major setback either: to understand motivation overall, we were already planning to triangulate reactions using concept testing.

The second limitation was that I did not provide an incentive to users to sign up for later beta testing other than the value of contributing to Dashmote’s growth, which would provide a long-term return on investment into their product. I knew that by not providing a monetary incentive, I may not see many or any users sign up, but in any case I could gauge how important incentives are to our users for later studies.

Results


Of the users who received the email without issues (no bouncing), 18% opened the email, and 3.5% clicked through to read the article linked in the call-to-action at the bottom of the email. One user read the article but then unsubscribed from future emails. Excluding this user, the share of recipients who opened the email, clicked to “Learn more,” and read the article was 3.2% (this user is also excluded from the analysis continued below).
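As a worked illustration of that adjustment, here is a short sketch; the counts below are hypothetical, chosen only to reproduce the reported rates:

```python
# Hypothetical counts chosen to reproduce the reported rates; the real
# totals are not shown in this write-up. One clicker unsubscribed after
# reading the article and is removed from the adjusted numerator.
delivered = 343
clicked = 12
unsubscribed = 1

print(f"Raw CTR:      {clicked / delivered:.1%}")                   # 3.5%
print(f"Adjusted CTR: {(clicked - unsubscribed) / delivered:.1%}")  # 3.2%
```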

18% opened the email and 3.2% clicked the call-to-action

Clicks came from six different client companies and eight different countries.

Users who clicked to read the article came from multiple countries, including Switzerland, Korea, Australia, Singapore, the Netherlands, the United Kingdom, and Germany.

About two-thirds of the clicks to the article also came from users representing the main user persona my team would target with this solution: Ava the Market Tactician. At least one user representing each of our other personas also clicked the CTA. Click here to read more about these personas and how my team created them.

Proportion of clicks coming from users who represented our personas




Most clicks to the article came from users who had visited the Dashmote app within the past two months. Of these users, about 36% had visited recently (within the past 30 days). The recent users who clicked to view the test article represented about 5.3% of the total active users.


All users who viewed the article came via the email message rather than through Dashmote application cards. No reactions from external users on the article were recorded.

Lastly, there were no responses to sign up as a beta user. It was not possible to track the number of users who clicked the article CTA but did not sign up.



Findings

Overall, the results of this test provided evidence to support at least some interest in the development of a self-service analytics feature. 

  1. Hypothesis condition #1:  # users who open the email and then click to learn more > 1 (at least 2.4% of active users) → SUPPORTED
  • The total number of users who clicked on the email call-to-action and did not unsubscribe was more than one person, and represented 3.2% of the total email recipients (0.8 percentage points higher than the expected 2.4%).
  • Clicks from recently active users represented about 5.3% of the total active user base.


  2. Hypothesis condition #2: # users who sign up to participate > 1 → REJECTED
  • 0 users signed up. 


There was interest from varied locations and companies. Users who clicked to learn more about our solution represented six different companies and eight different countries, showing that the solution sparked interest across a broad spectrum of geographies and business goals.


Ava was the most interested persona. Users representing our primary persona for this solution showed the most interest. This meant we were indeed building for the right person.


Passive beta recruitment was ineffective. According to this test, users are not motivated enough by this feature to participate in beta testing after only a passive prompt from an article. Recruitment for later beta testing will likely require incentives, or at least asking current clients directly to schedule time with us, relying on a small amount of social pressure.


A 24-hour activity window is enough. Nearly all activity during this test occurred within the first 24 hours of launching the campaign. For future campaigns, we will likely be able to shorten the duration of the test period by 1-2 days. We should not, however, only give a 24-hour window; it is important to see when activity plateaus as well.


The beta testing sign-up message needs improvement. A possible reason for the lack of beta-testing sign-ups is the phrasing, “...we can use to schedule a session with you.” When I presented the results of this test to my team, a couple of people mentioned that, as users of other products, they enjoy trialing new features but do not enjoy the idea of someone getting in contact with them, or of participating in a long session. It is possible that our users felt the same barrier to signing up, so in a follow-up attempt to recruit participants, we will remove this wording and simply ask for an email address. I also could not track clicks to sign up for beta testing with Google Forms or Intercom, so I did not know whether users clicked to sign up but did not complete the form. As a result, I may have missed another lens by which to measure highly engaged interest.


Conclusions

As a result of this test, I recommended proceeding with development of the solution. I presented these results alongside positive results from a technical proof of concept (showing development and maintenance costs) as well as additional concept testing, and we ultimately decided to develop the solution.
