I'm at the Mobile Health Expo in the Caesars Palace Convention Center in Las Vegas. Having successfully maneuvered past Barry Manilow, Cher, Donny & Marie Osmond, and 20,000 slot machines, I encountered NoMoreClipboard and their interesting use of PHRs with low-income diabetes patients.
Functionality
Looks pretty straightforward--view portions of the medical record, enter blood glucose, send prompts to patients and to physicians. Uses desktop web, mobile web, and SMS.
Implementation
They’re conducting a pilot with the Howard University Hospital Diabetes Treatment Center in Washington, DC, in neighborhoods that have a high incidence of diabetes. Patients are typically low income and either on Medicaid or uninsured. The program begins with community-based screening and initial treatment in a tricked-out RV (aka mobile health clinic). In the RV, their information gets entered into Howard’s EMR.
When patients get off the RV, they are greeted by a “PHR Educator” (first time I’ve heard that term). The PHR Educator gets them set up with an account, which includes downloading their data from the Howard EMR system into the PHR, as well as filling in gaps in the data. Patients are encouraged to enter their glucose readings several times a day. About every three months, they collect HEDIS data from patients via online surveys.
They have 232 patients using the system so far. They've only recently launched the mobile component, and it's growing fast.
Incentives
For the patients: The program provides “medical minutes,” essentially subsidizing part of the patient’s data plan. But they’re very clear that they don’t want to pay for everything. The patient needs to pay for at least part of the phone and part of the data plan, to ensure they have skin in the game. The program will provide new phones, or patients can use their existing phones.
For the physicians: Cupcakes. They’ve succeeded in getting clinical buy-in by providing cupcakes. Every time a physician gets another 20 patients signed up, they get a hand-delivered box of Georgetown Cupcakes, which are evidently delicious.
Preliminary findings & observations
Not all the patients use the system, but those who use it tend to use it a lot. This is consistent with findings from the California Health Care Foundation study that showed low PHR adoption by people with low socioeconomic status, but high usage by those who do adopt.
Age 60 appears to be the dividing line--over 60, they tend not to use it. This is different from other data I’ve seen, which shows 70 or 75 as a clearer dividing line.
1/3 use once/week or more; 1/3 use PHR at least once a month; 1/3 rarely use it
MDs report enhanced dialog between patients and providers. Communication is more frequent, complete, and accurate
They’re claiming reduced HbA1c, BP, cholesterol, ER visits, and hospital readmissions, but didn’t provide any specifics (to be published, but not yet)
In his TED Presentation on data visualization, David McCandless touches on information overload (starting ~16:38), suggesting that data visualization is one tool in our battle with information overload--that good data visualizations enable us to take in data through our eyes and process it in our brains much faster than similar amounts of data communicated through text and numbers.
Bits are heavy. Though they have no physical weight, bits--the electronic data that flows in and out of our e-mail inboxes, cell phones, Web browsers, and so on--place a weight on anyone who uses them. A laptop computer weighs the same few pounds whether it holds one e-mail or a thousand, but to the person who has to deal with all those e-mails, there is a big difference. Appearing in large numbers as they often do, bits weigh people down, mentally and emotionally, with incessant calls for attention and engagement....
The problem can be solved by learning bit literacy, a new set of skills for managing bits. Those who attain these skills will surmount the obstacles of overload and rise to the top of their professions, even as they enjoy a life with less stress, greater health, and more time for family and friends. Bit literacy makes people more effective today, even as it equips them for the future.
Mark points out that you can read every day about the information overload problem, but it's very difficult to find practical help dealing with information overload. So his book, Bit Literacy, provides elegant, practical techniques for just that, most of which involve filtering, prioritizing, and organizing incoming data.
I see an intriguing connection between data visualization and bit literacy--an underlying suggestion of a powerful technique that I'll call "compression." Think of it this way:
When a program like WinZip or iTunes compresses a file, it creates a new file that contains most or all of the source information, but using fewer bits to represent that information.
And data visualization does the same thing. A good data visualization takes a large amount of data, either qualitative or quantitative, and displays it in a form that conveys most or all of the source information, but using fewer bits to represent that information. This suggests the notion of "compression" as one technique for dealing with information overload.
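To make the analogy concrete, here's a minimal Python sketch (my own toy example, not anything from Mark's book) of lossless compression at work: the compressed version uses far fewer bytes, yet decompressing it recovers the original exactly.

```python
import zlib

# A verbose "report" with lots of redundancy, like most raw data.
original = (
    "Patient checked blood glucose at 8 AM. Reading in normal range. " * 200
).encode("utf-8")

compressed = zlib.compress(original, 9)

print(f"original:   {len(original):,} bytes")
print(f"compressed: {len(compressed):,} bytes")

# Lossless: decompressing recovers every bit of the source.
assert zlib.decompress(compressed) == original
```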
A few compression examples come to mind.
In the last few years, management "dashboards" have started proliferating. These dashboards essentially take a large amount of information about how a product or company is performing, and compress it into one or two pages of charts, key performance indicators, and short explanatory text. This compressed version of the information enables a manager to take in a tremendous number of bits very rapidly.
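Here's a toy sketch of that kind of compression, using made-up call center records: ten thousand raw rows reduced to three numbers a manager can absorb at a glance.

```python
import random
import statistics

# Hypothetical raw records: one dict per customer support call (made-up data).
random.seed(42)
calls = [
    {"minutes": random.uniform(2, 30), "resolved": random.random() < 0.8}
    for _ in range(10_000)
]

# The "dashboard": 10,000 records compressed into three KPIs.
kpis = {
    "total_calls": len(calls),
    "avg_handle_minutes": round(statistics.mean(c["minutes"] for c in calls), 1),
    "resolution_rate": round(sum(c["resolved"] for c in calls) / len(calls), 3),
}

print(kpis)
```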
Design personas fulfill a similar function. We start with mountains of data from many sources to understand our customers and their needs, and we compress that data into a small number of composite characters called personas. Then we use those personas to communicate with the project team and stakeholders. Essentially, we create compressed versions of the data.
Both of these are examples of "lossy" compression. In the world of compression, "lossless" compression means the compressed file contains all of the information from the original--it's just stored more efficiently. When you download a software application, that software is typically stored in a lossless format, so that when you decompress it, you get all the information of the original. Contrast this with "lossy" compression, in which the compressed file is smaller because some of the original information has been discarded. This is what you get with an mp3 audio file--you can still enjoy the song, but some of the audio fidelity has been removed so you can fit more songs on your iPod. The trick with lossy compression is to systematically determine a) how much fidelity is required, and b) which data can be removed while still retaining the key information.
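A toy sketch of the lossy tradeoff (plain quantization, nothing like a real audio codec): rounding away precision shrinks the data, and the discarded detail can never be recovered.

```python
# Toy lossy compression: quantize readings, trading fidelity for size.
readings = [98.634, 98.721, 98.677, 99.102, 98.988, 98.641]

# Keep one decimal place: fewer distinct values, so the data compresses
# better downstream, but the discarded precision is gone for good.
lossy = [round(r, 1) for r in readings]

max_error = max(abs(a - b) for a, b in zip(readings, lossy))
print(lossy)      # [98.6, 98.7, 98.7, 99.1, 99.0, 98.6]
print(max_error)  # worst-case fidelity given up in exchange for compression

# The two design questions from the text, restated:
# (a) is max_error acceptable for the decision at hand?
# (b) which detail can we afford to throw away?
```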
Back in our information overload space, this becomes the key question--how can we systematically reduce the bits coming at us so that we can send and receive the essence of a large data set while retaining the key information we need to make informed decisions?
One more example highlights the potential power, and the risk, of using data visualization to combat information overload:
A stock ticker widget essentially compresses all of the data about stock trading into a handful of numbers. After millions of trades today, the Dow Jones was up 1.2%, ending at 10,603.54. This is an attempt to compress not only the stock market, but the economy as a whole. If the Dow is at 10,603.54, the economy is probably better than it was last year, but still struggling.
So the stock ticker saves me the trouble of having to look at all of the data about today's trading. This is good. On the other hand, when there's a TV screen in my elevator barraging me with data about how the Dow, NASDAQ, and S&P 500 are changing from one minute to the next, that's way more information than I need or want. Some further compression would help. As in software compression, it's not only a question of which data to keep and which to remove--it's primarily a question of how small I need the compressed version to be. For a typical consumer, we could convey even more information, in an even more compressed form, by presenting a weekly updated graph of performance over the past 10 years.
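A rough sketch of that last idea, using synthetic prices and the pandas library: roughly 2,600 business-day closes collapse into about 520 weekly points that still show the 10-year trend.

```python
import numpy as np
import pandas as pd

# Synthetic daily closing prices for ~10 years (a random walk; made-up data).
days = pd.bdate_range("2000-01-01", "2009-12-31")
rng = np.random.default_rng(0)
close = pd.Series(10_000 + rng.normal(0, 50, len(days)).cumsum(), index=days)

# "Compress" the daily closes into weekly points -- still enough to see
# the long-term trend, far fewer bits for a reader to take in.
weekly = close.resample("W").last()

print(len(close), "daily points ->", len(weekly), "weekly points")
```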
So I'm having fun playing around with this metaphor, and I have three main questions:
1) Who else has written about compression and/or data visualization as a means to combat information overload?
2) What are some more examples of compression being used effectively to combat information overload?
3) How might we apply this concept in fresh ways to make ourselves more productive and happier each day?
For those who do not currently have meaningful access, but who could get meaningful access as a result of our efforts, we might think of two complementary paths:
Bring the people to the technology
Bring the technology to the people
In the first case, we're changing the people. In the second case, we're changing the technology.
By "changing the people," I simply mean finding ways to help these folks take advantage of tools others already have. For example:
Public access computers in libraries, medical centers, etc.
Subsidized access (e.g., some health plans give away cell phones with unlimited minutes for interactions with the health plan)
Training on how to purchase, use, maintain, and troubleshoot
In the second case, "bring the technology to the people," we're changing the technology, content, and functionality to make it more accessible, appropriate, and useful to people. For example:
Change our push messages from phone and email to SMS (see the sketch after this list)
Optimize existing web sites for access on pocketable devices
Convert key Web interactions to work on IVR (touch-tone telephone trees)
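As a small illustration of the first item, here's a hedged Python sketch (all names, numbers, and wording are placeholders, not a real system) of reworking an email-style reminder into a message that fits a single SMS segment; the actual send would go through whatever SMS gateway is already in place.

```python
# Toy adaptation of an email-style reminder into an SMS-sized message.
# All names, numbers, and wording below are placeholders, not a real system.

EMAIL_BODY = (
    "Dear Maria, this is a reminder that you have an appointment with "
    "Dr. Lee at the Midtown clinic on Tuesday, June 8 at 2:30 PM. "
    "Please bring your current medication list and arrive 15 minutes early. "
    "Reply to this email or call 555-0100 if you need to reschedule."
)

def to_sms(name: str, when: str, phone: str, limit: int = 160) -> str:
    """Compress the reminder so it fits in a single SMS segment."""
    msg = f"{name}: appt {when}, Midtown clinic w/ Dr. Lee. Call {phone} to change."
    return msg[:limit]  # hard cap at one 160-character segment

sms = to_sms("Maria", "Tue Jun 8, 2:30 PM", "555-0100")
print(len(EMAIL_BODY), "chars as email ->", len(sms), "chars as SMS")

# A real deployment would hand `sms` to an SMS gateway (for example, a
# Twilio-style messages.create(to=..., from_=..., body=sms) call) rather
# than print it.
```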
For folks who don't have meaningful access and won't have meaningful access regardless of what we do, I left out what could be a key strategy when I blogged a couple of days ago:
Use higher end technologies with other people so as to free up more traditional resources to attend to the needs of those who don't use those technologies. Here's a way to think about it: If we can use the web to save phone calls to a call center, that should free up call center resources. We would then need to deploy those call center resources to better serve the needs of the people who don't use the web.
I'm liking this basic approach of organizing our strategies based on meaningful access. But I also have a suspicion that we might do better to simply look at age and socioeconomic status (income & education). There's a ton of data out there, and the trick remains finding ways to simplify our approach while respecting the integrity of all that data.
Many of the same populations that suffer from health disparities also have lower Internet usage. How can we use the Web and mobile technologies to close the gap between the haves and have-nots, rather than increasing the gap?
As always, we need to start by understanding the people involved.
There are many ways to describe the people likely to get the short end of the stick in terms of health, healthcare, and technology. The most obvious are:
Psychographics (e.g., those deeply engaged in their health vs. those who don’t pay much attention to their health, or those who love the latest gadget vs. those who fear computers)
Access to Technology (e.g., those with desktop broadband vs. those with dial-up, vs. those with smart phones, vs. those with cell phones & SMS, vs. those with none of these)
The problem I keep bumping into is that these factors overlap in very complex ways, and all simple approaches to segmentation seem to oversimplify way too much. For example, Hispanics are more likely to have adverse health outcomes than whites, and they’re also less likely to have broadband, but they’re more likely to access the Internet on their phones. Does this mean we can use smartphones to decrease health disparities for Hispanics? Not necessarily—I'd guess that the Hispanics suffering most from health disparities are those least likely to have smartphones.
Four years ago, in the report Expanding the Reach and Impact of Consumer eHealth Tools, Cynthia Bauer and colleagues at the Dept. of Health and Human Services did an impressive job of researching, analyzing, and organizing the field of eHealth Disparities. One of their main conclusions was that we needed more data at the subpopulation level. That gap in our understanding has closed a little in the last four years, but we're still struggling to understand the individuals most at risk of being caught between health disparities and digital disparities.
That said, I think we're close to having a practical starting point.
As we think about strategies to address eHealth Disparities, we might find it helpful to start segmenting in terms of meaningful access to technology. “Meaningful Access” refers to the need to have more than just a computer. Meaningful access requires:
hardware
Internet connection
skills to use them
ongoing technical support
relevant useful content and functionality
If we take the people most vulnerable to health disparities and subsegment them by meaningful access, then some high-level strategies start to emerge:
The “haves”
Those who already have meaningful access, or those who will gain meaningful access in the next few years with or without our efforts.
Strategies:
Promote existing content and functionality to them
Enhance current content and functionality to be more useful to them
Create new content and functionality for them
The “could haves”
Those who don’t have meaningful access, but who could gain meaningful access as a result of our efforts.
Strategies:
Use the same strategies as for the “haves” above, and also…
Support public access points (libraries, medical centers, shopping malls, etc.)
Support simple and inexpensive access on devices they already own, e.g., SMS texting, including paying the per-message fee
Support public policies and funding that increase access for the underserved (e.g., community-wide wi-fi, extend universal access programs to cover not just phone but also Internet)
The “won’t haves”
Those who don’t have meaningful access, and who still won’t have meaningful access 3 years from now regardless of our efforts.
Strategies:
Support “infomediaries” such as family members who use the Web and mobile devices on behalf of those who don’t
Maintain and enhance non-technology-based services
This might be a starting point. The next step would be to gather more information about each of these groups to understand whether these groups are homogeneous enough to have similar needs that can be addressed with similar efforts.
I've been doing online consumer health for over 15 years, most of it with Kaiser Permanente. As I think back on some of the key capabilities that were originally visions on the far horizon and are now simply part of the landscape around us, I remember when each of these was "the next big thing."
health information previously available only to professionals, made widely available to consumers
health risk assessments with personalized feedback
online appointment requests
online prescription refills
online appointments booked in real time
select a physician online
apply online for coverage
email my doctor
secure messaging with my doctor
view my medical record
Each of these is now everyday reality to millions of Kaiser members. We've reached these horizons and moved on to the next. So what's next? When someone asks me, "What's the next big thing," I usually end up talking about two areas:
A better user experience
Broader reach
Despite lots of powerful and valuable possibilities for new functionality, from personalization to portable medical records to home monitoring, I think the biggest value to individuals and society will come from improving the user experience of the current functionality, and making that experience available to a broader audience.
1. Better user experience
We need to take all the capabilities we've already implemented, and make them...
easier to use
more integrated
better adapted to real-life scenarios and tasks of our users
2. Broader reach
Over 3 million Kaiser members use the powerful tools we've provided. That's not nearly enough. In addition to increasing the number of web-using Kaiser members who use this stuff, we need to expand these tools to...
people who are traditionally underserved by the healthcare system
people who don't have easy access to PCs with broadband connections
people whose physicians aren't currently part of an integrated group practice
The Mobile Factor
Cell phones won't take us all the way to these horizons. But they can certainly help us get there. In terms of user experience, mobile devices can make simple transactions ridiculously easy, and they can fill in the gaps between in-person, telephone, and desktop web interactions. If we do it right, mobile interactions will become a linchpin of ubiquitous, integrated, cross-channel experiences.
Not only can mobile devices support much better experiences, it's getting clearer all the time that they can help us extend these services to people who are traditionally left behind by the latest technology. If we do it wrong, our mobile efforts will just exacerbate the already shameful chasm between the haves and have-nots. But if we do it right, and I think we can, we can use mobile technologies as a powerful tool in shrinking that gap.
Every journey to a great user experience starts when someone somewhere empathizes with a user in a way that is authentic, human, and compelling. It starts when someone realizes that the most important person to pay attention to is not the person paying for the project or implementing the software, though these folks are certainly important. It starts when someone really, truly, deeply pays attention to the person who will end up using what's being created. User experience starts when someone starts caring about the user.
Next comes humility.
Anyone who has watched usability testing or ethnography or a focus group knows the feeling. I remember my first usability test - I had spent weeks with a talented team designing a great user interface. All we needed to do was validate the design with some end users and possibly make some tweaks. An evening of usability testing provided a huge heap of humility. Labels that were intuitive to me were enigmas to the users. Buttons that were totally obvious to me were invisible to users. Most of the people I showed it to couldn't even figure out what it was supposed to be.
Sometimes we argue with the first few people who have trouble with a design:
"This first test participant is an outlier." "The second test participant isn't a good representative of our customer base." "The third test participant must just be stupid."
But by the time we get to the fourth user who can't find their way through our design, we start to realize that maybe the problem isn't with the users. Do this enough times with enough designs and we learn a very healthy sense of humility. Two quotations that help keep me humble:
Several years ago we were testing a new home page design. We brought in several users and gave them what we thought were a handful of simple tasks to complete. By the time we got to the third user I was already starting to hear the humility music playing, and it came to a crescendo when the moderator asked, "so where would you click?" and the user responded, "I'd click over here on 'careers' so I could get a job with these bozos and fix their damn web site!"
And I remember Haim Hirsch, my first user experience mentor, who had conducted hundreds or thousands of usability tests, telling me, "I've never conducted a test where I wasn't surprised by something." So that's my goal - to be humble enough to assume that my design has major problems, and to be grateful to the users who will help me find those flaws.
Empathy is about caring. Humility is about knowing that I don't know everything. And moxie is about knowing that I do know some very important things, and in some cases I know more than my users do. Moxie means that I sometimes contradict the users. It means I have the expertise and the larger vision to create something they hadn't thought of or to do something in a way they would not suggest. We need to conjure up some moxie, because users are great at identifying problems, but not always great at identifying solutions. User needs should drive all key decisions; user design suggestions should get added to the list of possibilities.
Moxie in the absence of empathy and humility is a recipe for disaster. But in combination with them, moxie enables us to break paradigms, innovate, and create solutions that meet our users' needs and desires better than anything they could have designed themselves.
Empathy - I care
Humility - I don't know everything
Moxie - I know something
In a dark conference room, the vendor lays out a simple user scenario and starts walking me through the experience. But before he gets to the second click, he needs to mention the cool feature that he isn't clicking on, which leads to a short discussion of the underlying technology, which evolves into a few key points about their design process. When he finally finishes telling me about their unique value proposition and returns to the screen, I can't remember anything about the scenario, which is irrelevant anyway, because now he's running through the menu bar telling me about each of the features.
The thing that's screwed up about this story is that the story gets lost.
Stories are perhaps our most powerful tool when demo'ing software. Stories grab our attention--as human beings, we can't help it. Stories are sticky. They stick in our minds much more than explanations, descriptions, or data, and once they get into our brains, all kinds of powerful things start to happen. We intuitively fill in gaps in the narrative; we make connections with other parts of our lives, and--very importantly--we remember. If you want your audience to remember what you said, tell a story.
A good story tends to have some typical components that can all be included quite easily in a software demo, even in a short demo that lasts only 60 seconds. Be sure to include:
The main character, your user--give him or her a name (let's use "Bored Bob")
The setting/context--when and where is Bob using the software (Bob's at home, sitting on his couch with his laptop on the coffee table and the tv on across the room)
The central conflict--every story needs a conflict, and for demos, the conflict is usually a problem the user is having that the software will help solve (Bob wants to go to a movie, but he doesn't know which movie to see)
The plot--what happens (Bob googles "movies near me" and clicks on my web site. He starts by browsing through a list of new releases, gets drawn into a couple of previews, purchases a ticket for the one he wants, and buys the soundtrack on a whim.)
The resolution--in a demo, this is the "so what" part--what are the benefits to Bob and to the owners of the software? (Bob loves his movie and finds a new favorite musician on the soundtrack; my movie site grabs a customer that would have otherwise gone elsewhere, and we make an extra $5 profit from the soundtrack sale)
Five elements sounds like a lot, but it doesn't have to be. Here's what we ended up with in my simple example:
Bored Bob is at home, sitting on his couch with his laptop on the coffee table and the tv on across the room. He wants to go to a movie, but he doesn't know which movie to see, so he googles "movies near me" and clicks on my web site. He starts by browsing through a list of new releases, gets drawn into a couple of previews, purchases a ticket for the one he wants, and buys the soundtrack on a whim. Bob loves his movie and finds a new favorite musician on the soundtrack. And my movie site grabs a customer that would have otherwise gone elsewhere, and we make an extra $5 profit from the soundtrack sale.
A good demo of my web site will tell this story very early on and, very importantly, without interruptions. If I stop the story to explain a feature or why we did something the way we did, the story gets lost--much better to tell the story beginning to end, simulating Bob's experience on the screen, and then go back for explanations and details. When I'm done with the story, I can talk about the incredibly cool matching algorithm we developed to push the right movies to Bob, and the subtle up-sell techniques that prompted him to buy the soundtrack, and the projected increase in revenue due to these enhancements. Those are important things to tell my audience about, but they should never detract from the star of the show.
The star of any great demo is the end user experiencing the product.
Last week I spent 3 days in storyland. I was in Chicago at a meeting of the Innovation Learning Network, focused on the use of stories in business and innovation. I listened to lectures on the psychology of stories; played games with a Hollywood script writer, a radio show editor, and an acting troupe; and spent a morning redesigning airport security procedures.
It got me thinking about the interaction between stories and data.
Ideally, stories and data have a symbiotic relationship. Stories add richness, context, and meaning to data while data adds richness, context, and proof points to stories. Used together well, they are a powerful combination for telling the truth about the past, present, and future. But if we get sloppy, they can undermine each other.
I think there are two ways that data supports stories:
Data helps us figure out which story to tell
Data helps us tell the story
These are two very different uses. Confusing one for the other can have tragic consequences.
Data helps figure out which story to tell
First, data helps us analyze reality to determine what's actually happening in the organic stories of real life. By collecting and analyzing data, we begin to piece together a narrative that represents what's going on--who is doing what? how often? how many? how consistently? etc. That's what data analysis is all about--good data helps me understand reality.
For example, if I want to tell the story of how someone uses my web site, I can look at data about user demographics, site traffic, user reactions, and user behaviors. This data enables me to construct a story that accurately represents something real. It may also enable me to create a vision for the future that is grounded in today's reality, or to describe a future that is demonstrably different from today's reality.
With a foundation of good data analysis, I can then create narrative stories that describe reality to my audience in ways that are accessible, compelling, and memorable.
Data helps us tell the story
Once I know which story to tell, I can use data to improve the impact of the story I'm telling. Data enhances the narrative, adding proof points to make it believable, adding context to make it understandable, and adding details to make it compelling.
The pitfall
Problems arise when I mix up the two uses of data. If I'm telling a story, and I grab some data to make my story more compelling, but I have not yet used data to make sure I'm telling the right story, then I'm in trouble. For example, let's look at data and stories about global climate change.
If I want to tell a story about the relationship between human activity and the climate, I need to start by examining the data in order to figure out what's real. If I'm paying any attention at all to science, I will conclude that the story to tell is one in which humans are driving the earth to the brink of climate catastrophe. I can then use data, both qualitative and quantitative, in the telling of that story. E.g., I can cite rising average temperatures, thinning ice sheets, ocean acidification, and displacement of species. All of these will support my story. These data make my story more accessible, compelling, and memorable.
However, if I skip the first step (using data to determine what is real), then I may tell a story in which global warming is a crazy liberal plot, and I can still use data to support my story--it snowed in April this year; global temperatures historically rise and fall; etc.
I think the critical step is for a storyteller to be self-aware of which of these two roles data is playing. Am I using data to figure out which story represents reality, or am I using data to support the story I'm telling? Both are important, and both are powerful. But if I do one without the other, I either end up with a less compelling story, or I'm distorting the truth.
As it becomes easier to interact digitally, there will be plenty of opportunities to see your doctor without the two of you being in the same room at the same time. In many instances this will be more convenient, with equal quality, and with potentially lower costs to the health care system.
On the one hand, a virtual visit can be a fabulous thing all around, but there are certainly times when an in-person visit is more appropriate. So the question I'm asking myself lately is,
How do we decide between an in-person visit and a virtual visit?
The easiest, and possibly best, answer is, "Let the patient choose." This initially looks like the patient-centered approach. And we essentially let the patient choose today in most situations, as patients choose whether to call on the phone, email, or schedule an in-person appointment. But I'm guessing that approach oversimplifies the situation.
There will certainly be situations in which the health care provider should strongly recommend an in-person visit, even when the patient initially prefers a virtual visit. Possible criteria for an in-person visit include:
a physical examination is needed
an emotionally intense decision needs to be addressed
the patient and their primary physician don't yet have a solid relationship, and an in-person visit could help establish rapport that could then be carried over into future virtual visits
Conversely, what are the indications that a virtual visit is more appropriate? When should the health care provider strongly recommend a virtual visit? Here are a few possibilities (roughly sketched in code after the list):
the ordeal of traveling to the clinic would be unhealthy
biometrics are needed that can be more accurately measured when the patient hasn't just spent two hours walking, riding buses, and being quizzed by people in white coats
time is of the essence and a virtual visit could take place sooner than an in-person visit (this criterion could apply, e.g., in rural areas or anywhere that has long distances between patients and providers)
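To make these criteria a bit more concrete, here's a toy decision-aid sketch--my own illustration, not a clinical guideline; the criteria names, ordering, and thresholds are made up and would need clinicians to define and validate them.

```python
from dataclasses import dataclass

@dataclass
class VisitContext:
    """Toy inputs; a real tool would use clinician-defined criteria."""
    needs_physical_exam: bool
    emotionally_intense_decision: bool
    established_relationship: bool
    travel_is_burdensome: bool
    needs_resting_biometrics: bool
    days_until_in_person_slot: int
    days_until_virtual_slot: int

def recommend_visit_type(ctx: VisitContext) -> str:
    # Hard indications for an in-person visit come first.
    if ctx.needs_physical_exam or ctx.emotionally_intense_decision:
        return "in-person"
    if not ctx.established_relationship:
        return "in-person"  # build rapport that later virtual visits can draw on
    # Otherwise, look for reasons a virtual visit serves the patient better.
    if ctx.travel_is_burdensome or ctx.needs_resting_biometrics:
        return "virtual"
    if ctx.days_until_virtual_slot < ctx.days_until_in_person_slot:
        return "virtual"  # time is of the essence
    return "patient's choice"

print(recommend_visit_type(VisitContext(
    needs_physical_exam=False,
    emotionally_intense_decision=False,
    established_relationship=True,
    travel_is_burdensome=True,
    needs_resting_biometrics=False,
    days_until_in_person_slot=14,
    days_until_virtual_slot=2,
)))  # -> virtual
```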
These are some starting points, hopefully raising interesting and useful questions. Here are a few:
Who has done work on this problem? I'm particularly interested in any efforts to quantify the analysis.
What metrics can be used to assess the relative value of an in-person or virtual visit? The obvious starting places are quality, cost, and satisfaction, but we would need some way to measure these across a wide variety of contexts, and I fear the measurement would quickly get too complex.
How can we apply the tools of human-centered design to these questions? E.g., are there opportunities for rapid prototyping of a decision tool to choose between in-person and virtual?
How can we ensure patient safety while experimenting in this space?
What are the implications for pricing of care? Should all virtual visits and all in-person visits be covered, or only those deemed "appropriate"?
Has anyone attempted to bake this decision into a clinical guideline?
Telephone nurse triage systems currently have similar decisions built into their protocols--can we take care of this person over the phone, or do we need to see them? Care models that use telephone MD visits are also relevant. How can we expand what we've learned from these experiences to address a new type of interaction that is typically less rich than an in-person visit and richer than a phone call?
I can't help but come back to the original, simplest answer: an in-person visit is appropriate when the patient wants an in-person visit. The challenge for health care providers is the same challenge as usual: how can we apply our knowledge, expertise, and compassion to help people make sound judgments in the face of uncertainty? Despite the complexities of reimbursement, diagnostics, relationships, and technology, the patient should get to decide. How can we best help them with that decision?
Everybody’s talking about mHealth, the use of mobile devices to improve health and healthcare.
“mHealth will replace the web”
“What’s your mHealth strategy?”
“How does SMS map to your mHealth initiative?”
But I think the focus on mHealth may be misplaced. Mobile devices will certainly enable us to increase the reach of our current efforts, and they will help us do valuable things we couldn’t do before. But focusing on the mobile devices themselves creates a big, all-too-familiar risk.
We risk basing our strategy and actions on the technology, rather than focusing on our users and our mission, and then looking for ways that technology can help.
So I propose a new buzzword to replace mHealth:
Let’s talk about uHealth.
I saw the phrase knocked around a bit a few years ago, but it didn’t stick and there doesn’t seem to be a commonly accepted definition, so I’ll try to bend the jargon to my message. Here’s what I mean by uHealth:
uHealth means health and healthcare that is…
ubiquitous
universal
user-centered
Mobile technologies can help with all three.
Ubiquitous Health
The most obvious benefit of mobile technologies is that they can help promote health anytime, anywhere. We can connect people with their data, their care team, their peer support, and their information, regardless of where or when those connections are needed.
This isn’t ultimately about having access when you’re on the road. It’s about having access on the road, in the clinic, at home, at work. It’s about access via my iPhone, my home phone, my PC, my television, my physician in person, my self-care book, and any other mechanisms that I choose. Mobile health shouldn’t stand alone—it should be an integral part of a larger ubiquitous network of support.
Universal Health
How can we use mobile technologies to extend the reach of our health support systems to those who are currently left out? In the near future, more people worldwide will access the Internet via mobile devices than via tethered PCs. Key underserved groups currently use mobile phones more than desktop PCs, and they use them to stay connected with the people, culture, and information they value.
So as we plan what to do with mobile, we need to be very clear from the beginning that this is so much more than an opportunity to give rich people even more—it’s an opportunity to level the playing field, cross the digital divide, and reach the people who most need help. This sensibility needs to impact everything from feature choices to technology platforms (e.g., texting vs. iPhone apps) to payment structures (how much does that data plan cost? Who pays for the SMS?).
User-Centered Health
It’s not mobile-centered health or technology-centered health--it’s user-centered health. As we look for strategic and practical ways to take advantage of what mobile offers, we need to see mobile in the context of a person’s everyday life. What’s important to each individual? How can this fit into their daily workflow? What big problems do they have that mobile can help us solve?
The principles of user-centered design will ultimately be our most powerful tools for helping us leverage technologies to improve health. If it doesn’t work from the perspective of the consumer/patient/end-user, then it doesn’t work.
And that’s my soapbox proposing the increased use of a new buzzword. mHealth can only become valuable in the context of uHealth. Mobile initiatives need to support health that is ubiquitous, universal, and user-centered.
A few days ago I said to a discouraged colleague, "the user always wins." She said, "not this time--when they made the decision to prioritize business needs over user needs, the users lost."
But I stick to my assertion. When all is said and done, the user wins all arguments, because the user makes the ultimate decisions that drive business value.
Users decide whether or not to buy
Users decide whether or not to recommend you
Users decide whether or not to return
Users determine your brand
So ignoring user needs for the sake of ROI is the ultimate false tradeoff. It's sometimes possible to gain some short-term ROI this way, but in the long run, it's the users who will determine your success or failure. If you only look as far as next week or next month, it may look like ROI can be achieved at the expense of the user experience. But if you look out to the long term viability of your business, it's a fool's bargain. ROI and user experience are forever intertwined.
Business value derives from, and is usually dependent on, a great user experience. When all is said and done, the user always wins.
We were scheduled to go live in just 4 more weeks. When we finally did serious usability testing, we found out the application was barely usable. Users didn't understand the paradigm we were using, couldn't find their way around, and were generally frustrated with the whole experience. When we described what we were trying to give them, they loved it--they just couldn't use the product we had designed.
So naturally we delayed the go-live date several weeks to give us time to fix the usability issues.
Hah! That'll be the day!
We went live, celebrated the launch, and then spent the next couple of years training our customer service reps to respond to complaints while we built the next round of products. Last I checked we were hoping to fix the problem "in an upcoming release tbd."
It will be a watershed day when we do actually delay a go-live date in order to fix a usability problem. It will be a sign that we've finally figured out how to get the fastest return on investment.
I contend that the faster we rush to market with poor UX, the longer it will take us to get our money's worth out of the product. The easiest time to fix a user experience and fix it fast is before going live. Certainly it's important, critical even, to optimize the experience once the product is live, but basic usability problems in a production environment take a whole lot longer to fix than the same problems before going live. Before going live, there's a magic combination of:
a focused team, deeply familiar with the product
an intensity in the drive to completion that this product will most likely never see again
few or no encumbrances related to an existing user base already familiar with a flawed product
Use this combination to drive solutions prior to going live, even at the "cost" of delaying. I put "cost" in quotation marks because of the fundamental premise of user-centered design:
Principle #1: Business value derives from, and is usually dependent on, a great user experience.
This all leads to...
Principle #2: Rushing to go live with a deeply flawed product delays the release of a user experience that will produce the business value we're funded to produce.
That's what makes the UX vs. time-to-market argument a false tradeoff. One sign of a maturing organization is a willingness to delay go-live in order to fix a bad user experience.
Of course, a sign of an even more mature organization is to never have to act on that willingness, because we never get close to our go-live date before discovering and fixing the user experience. So that brings us to...
Principle #3: The way to make "Principle #2" moot is to deeply engage with users from the very beginning.
Technical complexity hurts the user experience. Technical complexity makes it more difficult to change, optimize, enhance, and scale your design. In my experience, when the technical design gets too complex, only big changes are worth the effort--small changes are just not worth the amount of time it takes to make and QA the change. This leads to myriad small enhancements that get left by the wayside, and a UI that grows stale and stodgy.
So when your developer says "I can implement your design, but I'll have to hack something together to make it work," you should hear, "This had better be a darned good design, because you may never get a chance to change it."