This is part one of a series of case studies on how the biggest apps perform in-app user research. This series will help product managers, product researchers, and UX researchers better understand how they can leverage in-app feedback to learn faster.
At Qualli we want to learn from the biggest and brightest, but that doesn’t mean they’re always right. This week we’ll take on OpenAI’s ChatGPT app.
Let’s talk numbers first
OpenAI is the fastest-growing company ever, reaching 100 million weekly active users within two months 🤯 An estimated 2 million developers are working with it.
An estimated 15.6 million people installed the ChatGPT app in September alone 🚀 And 500,000 users signed up in the first six days after it launched in May.
So you can imagine why this makes ChatGPT such an interesting case to look at!
The Registration Loop
Once the app is installed, you are greeted by a very slick landing page. The background animates, showing a variety of colors and actions. I wait a couple of seconds here to take it in.
At the bottom, we are presented with three sign-up options and a log-in link:
Continue with Apple
Continue with Google
Sign up with Email
Log in
Let’s pick “Sign up with Email.”
Suddenly, an overlay with a web browser opens, which I find a bit strange, but not uncommon.
Just to try it out, I click “Cancel” in the top left corner and land back on the landing page. Let’s try the same thing a couple of times — so I click “Sign up with Email” and “Cancel” 5 times…nothing happens.
What if I just couldn’t sign up in the web view and kept trying until I was sick of it… Wouldn’t you want to ask me if you could help?
But okay, let’s continue — we fill in our email and credentials, hit sign up, and now we’re greeted by a “verify your email” screen.
Here is where it became interesting, but unfortunately not in the best way. It asks me to check my inbox; I do, but no email. Okay, maybe it will come later.
Let’s just try clicking the “I’ve verified my email” button at the bottom; maybe it works. I’m presented with a pop-up telling me my email isn’t verified. You can only try, right? But wait… the “OK” button on the pop-up doesn’t work. Now I’m stuck, unable to close it in any way, so my next step is force-closing the app…
…ugh. Did no one test that part?
Any other app would have lost me here; it really would have. But ChatGPT is the only one offering this service, so I’ll stick with it.
I go and check my inbox: still no mail. Does it really take this long?! I click “Resend email”, but accidentally tap it twice. Well hello, new pop-up with the same problem… This is a joke, right? Force-closing the app is again the only solution, and it’s testing my limits.
But now I see my email notification, finally, and I’m able to verify it. I click the link in the email, my browser opens (why not the app?!) and it tells me it’s verified. I close the browser, go back to ChatGPT, hit the button at the bottom, and I can proceed to the last steps.
First up, some minimal information about the user: first name, last name, birthday. No problems here!
Next up, they present us with 3 screens:
A welcome that doubles as a warning. Strange, but these services are new and misinformation has been spreading, so okay!
A GPT-4 upsell screen. No thank you, not after that onboarding, so I opt out.
An explanation of voice usage, short and informative.
Almost there! The last step asks us to choose GPT’s voice; I just pick the default one, which sounds good enough to me.
Phew, we’ve done it! Let’s discuss how this could have gone better. My overall feeling after this onboarding: frustration.
So, what did they do right? 👍
I like that the onboarding has a minimal flow. It doesn’t ask for a huge amount of input, only the basics.
The slides were informative about ChatGPT’s capabilities.
The design is excellent: crisp and clear everywhere.
How could they have improved this flow?
The obvious 👎
Please for the love of god fix your pop-ups.
Ask me for my feedback
I identified 3 opportunities to ask your users for feedback.
1. When the user cancels the first step of registration (multiple times), or cancels in the middle of their onboarding
When a user doesn’t proceed with their registration, or retries it multiple times, something is most likely off. Here I would simply ask, “Are you having trouble signing up?” (see the sketch after this list).
If not, it’s a one-tap answer; if yes, you can help them out immediately.
2. Detect rage taps
When a user starts tapping the screen in frustration (so-called ‘rage taps’), jump in!
3. Ask me about my onboarding experience when I’ve finished
Once I land on the chat home page, I don’t get the chance to give them feedback; I would have to manually find their support channel and email them.
So ask the user for 1 minute of their time, and let them tell you about their onboarding.
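To make these three opportunities a bit more concrete, here is a minimal sketch of what the trigger logic could look like on iOS. Everything in it is an assumption made for illustration: the class name, the thresholds (three cancellations, five taps within two seconds), and the survey copy are not taken from the ChatGPT app or from any particular SDK.

```swift
import Foundation

// Hypothetical trigger helper; thresholds, names, and survey copy are illustrative.
final class FeedbackTriggers {
    private var signUpCancelCount = 0
    private var recentTapTimes: [Date] = []

    /// Opportunity 1: call this each time the user cancels the sign-up web view.
    func registerSignUpCancel(showSurvey: () -> Void) {
        signUpCancelCount += 1
        if signUpCancelCount >= 3 {
            signUpCancelCount = 0
            showSurvey() // "Are you having trouble signing up?"
        }
    }

    /// Opportunity 2: call this on every tap; 5+ taps within 2 seconds reads as a rage tap.
    func registerTap(at time: Date = Date(), showSurvey: () -> Void) {
        recentTapTimes.append(time)
        recentTapTimes.removeAll { time.timeIntervalSince($0) > 2 }
        if recentTapTimes.count >= 5 {
            recentTapTimes.removeAll()
            showSurvey() // "Something not working? Tell us what happened."
        }
    }

    /// Opportunity 3: call this once, right after the user lands on the chat home page.
    func onboardingCompleted(showSurvey: () -> Void) {
        showSurvey() // "How was your onboarding? It takes 1 minute."
    }
}
```

In practice the app would call registerSignUpCancel from the web view’s cancel handler, registerTap from a global gesture recognizer, and onboardingCompleted the first time the chat home page appears.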
How are these an improvement?
In the worst case (everything goes wrong), you give users three touchpoints to ask for help or give feedback.
In the best case (everything goes smoothly), you still ask them how their experience was, making them feel heard.
On to the next part — Conversations with GPT
We land on the chat home page and dive straight into a new conversation. Nice and easy.
In a couple of weeks, I’m off to Vienna, so I enter the prompt: “I’m planning a trip to Vienna, and I would like you to make a three-day itinerary.” It takes a few seconds to load, and then it starts typing.
… I wish it would just give me the entire response at once, but I’m patient.
After 20–30 seconds, I get my list of to-dos and start reading. Everything looks great, and I’m pleased with it!
But let’s imagine, just for fun, that it was completely wrong — like, it starts rambling about Venice instead of Vienna. It doesn’t, but let’s just play along for a bit. 😉
How do I tell it the response is way off?? Using the web interface, you have a little thumbs up or thumbs down option, but I don’t see that here. So, I try tapping the screen…nothing happens. Maybe a long press? Ah yes, that opens the menu.
Here, I find the option to mark it as a “Bad Response 👎”. Let’s give that a shot!
The window closes, and a small notification pops up at the top: “Thanks for your feedback!”. And that’s it? Nothing more? You don’t want to know why I didn’t like it? I feel unrewarded.
Maybe I found it offensive, inaccurate, or aggressive…
How could they have improved this flow?
When a user indicates a bad result, ask a couple of questions about why. If you’ve ever read the book “The Culture Map”, you know people differ everywhere: what someone finds offensive in India is not the same as in the US.
I can’t imagine OpenAI not wanting to know the exact reason their response was bad.
But also, make your users feel heard and let them express themselves. This gives them a sense of contribution, making them more invested in your product and rewarding them for their input.
So when a user indicates a bad response, ask them for more feedback. They went to the trouble of telling you it was a bad response; most likely they will also want to tell you why.
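As a rough illustration, the follow-up to a “Bad Response 👎” tap could be as small as a handful of one-tap reasons plus an optional comment. The categories, type names, and send callback below are assumptions made up for this sketch, not OpenAI’s actual feedback pipeline.

```swift
import Foundation

// Hypothetical follow-up after a "Bad Response" tap; categories and names are illustrative.
enum BadResponseReason: String, CaseIterable {
    case inaccurate = "Inaccurate or wrong"
    case offensive  = "Offensive or harmful"
    case unhelpful  = "Didn't answer my question"
    case other      = "Other"
}

struct BadResponseFeedback {
    let messageID: String
    let reason: BadResponseReason
    let comment: String?   // optional free text, e.g. "It described Venice, not Vienna"
}

/// Shown right after the thumbs-down: one tap picks a reason, the comment stays optional.
func submitBadResponseFeedback(messageID: String,
                               reason: BadResponseReason,
                               comment: String? = nil,
                               send: (BadResponseFeedback) -> Void) {
    send(BadResponseFeedback(messageID: messageID, reason: reason, comment: comment))
}
```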
A pleasant surprise!
I most often use the voice assistant, as it makes hands-free conversation easy; while driving, for example, it is a very enjoyable feature.
Now I was given a very nice surprise! When you close your assistant manually, they ask you why!! 👏
Finally they want to hear from me!
This is what I’ve been looking for. It takes 2 taps to give my feedback, and I can help them improve my own experience.
Last part: what if I need support?
Right now, there is really only one option: the help center. It can be accessed through the settings, where it’s a bit buried away.
You get dropped into a help center containing the documentation and FAQs for the iOS app. They do filter by your OS type, which is very nice. From there I can search or send a message to support.
How could they have improved this flow?
Getting dropped into a support center isn’t always helpful, especially not with that many articles. So a small improvement could be to add a quick category filter in between:
What do you need help with?
My account
Technical Issues
Billing
Other
Then, when redirecting users to the Help Center, also filter on this category.
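A sketch of that pre-filter is below. The four categories mirror the list above; the help-center URL and the category query parameter are assumptions for illustration, since the real help center may expect a different link format.

```swift
import Foundation

// Hypothetical pre-filter before opening the help center;
// the "category" query parameter is an assumption.
enum SupportCategory: String, CaseIterable {
    case account   = "My account"
    case technical = "Technical Issues"
    case billing   = "Billing"
    case other     = "Other"
}

/// Builds a help-center URL that is already filtered on the chosen category.
func helpCenterURL(for category: SupportCategory) -> URL? {
    var components = URLComponents(string: "https://help.openai.com/")
    components?.queryItems = [URLQueryItem(name: "category", value: category.rawValue)]
    return components?.url
}

// Example: picking "Billing" opens the help center already filtered on billing articles.
```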
How could ChatGPT have used in-app surveys to improve their app? ❤️🔥
The ChatGPT app’s journey, as outlined in this article, presents several critical moments where user feedback could be instrumental in enhancing the overall experience. This is where in-app surveys shine, offering solutions tailored to address these pain points effectively.
Here’s how Qualli could have been pivotal in improving the ChatGPT app experience:
Responsive In-App Surveys at Key Moments
Qualli’s multi-step in-app surveys could be deployed at crucial stages of the user journey.
For instance, when a user repeatedly cancels during the registration process, a Qualli survey could automatically trigger, asking, “Are you experiencing any issues during sign-up?” This immediate feedback loop would not only identify problems in real-time but also show users that their experience matters.
Using Triggers For Immediate Feedback
Post-onboarding, Qualli’s trigger surveys could be used to ask users about their onboarding experience. This could happen right after the user completes the registration process, ensuring that the feedback is fresh and accurate.
In-depth Analysis of User Feedback
Beyond collecting feedback, Qualli’s tools for visualizing and analyzing feedback data would allow ChatGPT’s developers to understand common pain points, user preferences, and areas for improvement.
Tracking the User Feedback Journey for Continuous Improvement
With Qualli’s ability to track the feedback journey, ChatGPT developers could monitor how users interact with the surveys — from trigger points to completion or skipping. This tracking would provide insights into the effectiveness of the feedback mechanism and areas where it can be improved or optimized.