A few months back I had the opportunity to be a part of a project in which the goal was to design an app helping surgeons perform operations. The app was unique in that it adopted the rigid checks and procedures used in military aviation.
Behind this idea, there is a very interesting story of how the founders of Nodus Medical came up with combining the worlds of medicine and jet piloting. You can read all about it here. Nodus was not supposed to be just an app, but a whole digital assistant making the surgeon’s everyday life less stressful - from planning the operation, through performing it in the operating room, to relieving the surgeon of the paperwork that comes after.
So, what did being part of this project mean to me?
I simply love taking part in building products with a big mission, and helping save lives is certainly one. When you realize that, as a designer, you are the one to shape the way the product will work, how usable it will be, and what consequences it may bring if it isn’t - you get a great sense of both pressure and responsibility. This is why I treated this challenge with the utmost respect and diligence.
As described to us by the Product Owner, the Nodus Medical app was supposed to serve as a digital assistant to surgeons, supporting the efficiency of performing their surgeries and relieving them of unnecessary stress. It was also meant to improve communication in the surgical team and significantly reduce formal procedures and paperwork around surgeries.
Problems to solve
Thanks to the initial research done on the client’s side, we knew the key problems that needed to be addressed:
The cognitive load on surgeons of remembering detailed procedures, the serious consequences of errors, and the stress surrounding the process.
Did you know that up to 40% of post-surgical complications are avoidable?
You might wonder - do experienced surgeons really sometimes forget one or two key steps during surgery?
Well, I believe that it’s not about forgetting, but about allocating precious cognitive energy to recalling the next step of the procedure instead of being fully focused on the body part that is being operated on. Surgeons perform many different surgeries each day and medical knowledge advances all the time, causing the procedures to change.
Distributing information among the team members and efficient communication
Surgeons have their own ways of organizing medical equipment and their favourite instruments - each surgeon’s muscle memory is unique. It’s important that the medical staff supporting the surgery (which changes from one operation to another) know how to prepare the operating room and when to pass each instrument. Knowing which step comes after which is highly helpful in that process.
Time-consuming administrative tasks and paperwork - mandatory post-surgery reports
Each surgery needs to be followed up with proper documentation. A detailed report on the course of treatment, the methods used, and patient care recommendations needs to be recorded. This takes a lot of precious time for both the surgeon, who needs to dictate the report, and the nurse, who needs to transcribe it.
Restrictions of the specific context of the operating room
This was by far the most interesting of all the challenges in this project.
Why not just use a regular iPad app? Obviously, the surgeon’s hands are busy during the surgery. This makes it impossible to use a touch screen in a traditional way, not to mention that the environment is sterile and the surgeon’s full attention needs to be directed towards the operating area.
The lighting conditions may vary between surgeries, and the reading distance has to be taken into consideration, as does other medical equipment that might interfere with the target device.
It would also be convenient for the equipment to be set up in no time, so that it wouldn’t cause any additional hassle during the necessary medical setup before the surgery.
Apart from all those issues, other factors had to be taken into consideration, such as security and anonymity of the patient data as well as stability and efficiency of the software. A scenario where the app crashes during surgery was simply unacceptable.
So, as you can see, there certainly was a lot on the designer’s plate. How do you tackle such a challenge? How do you even start?
Well, the answer to this is the right process tailored to the stage of the product development and to what’s already in front of you.
We had a lot of data delivered to the team by the product owner on Nodus Medical’s side. A great part of the research had already been done, and we were able to draw on the expertise of a surgeon available to us throughout the whole project. So we started with a synthesis of the data to form conclusions, which were then crystallized into the product concept.
In practice, we based it on the result of stakeholder interviews, personas, user journey maps, competition analysis, some interface benchmarks, and observations drawn from initial prototype testing.
It’s not every day that a client is aware of the value of the right approach and of how defining the basics - who the product is for, what the target users’ needs and problems are, and what context they operate in (no pun intended) - can influence the product and its further success in the market. Not to mention patient care.
In Nodus Medical’s case we were lucky to have a client who believes in such an approach and it made me more confident in proceeding with the project knowing that we were building the product on solid foundations.
Searching for answers
We knew that choosing the right type of interface would determine the shape of the product and both the design and development phases.
After the initial data synthesis we knew what the main features of the app should be and that the best medium for the app during surgery would be an iPad placed next to patient body function monitors. That’s why we did another round of research, this time a more technical one, to establish what options we had to control the interface using input methods other than touch.
To me, this was the most fun part of the research, but also the most difficult one. It involved creativity in searching for sources of state-of-the-art devices and weighing the pros and cons of each of the options, sometimes with scarce evidence of the device’s success in various contexts.
The options we were considering were:
External button controls, e.g. foot pedal, hand remote control
A foot pedal had already been tested by some surgeons. It had quite a few disadvantages though, such as a complicated setup and confusion about which button on the pedal triggers which action.
All in all it added to the cognitive load which was supposed to be reduced in the final product. The case with a hand remote control seemed similar, but additionally raised the question of the surgeon’s hands being occupied with medical instruments. We had to look further.
Remote gesture interface
The need to keep the surgeon’s hands free brought us to another device. The idea with gesture control would be to use a band worn on the user’s arm which, thanks to muscle movement detection, enables controlling the connected interface.
There were some questions around this one - could the device be kept sterile? Would the calibration be reliable enough? Could it be seamlessly integrated with the iPad?
Eye-tracking interface
Next, we thought of an eye-tracker - a device made famous by both gaming and user research. An eye-tracker makes it possible to control an interface by focusing one’s gaze on certain UI controls. The device needs to be calibrated to the user’s eyes and works within a distance of up to 1 m. This sounded more promising, but still not perfect.
Again, the question was if the eye-tracker could be kept sterile and also whether it would not interfere with other operating room equipment. We also had to keep in mind the possible problems with performance outside the distance range of the device.
Brainwave interface
A brainwave interface requires a device in the form of a headband which captures and classifies the user’s brainwave activity. It enables controlling the connected interface by focusing attention on UI controls and blinking to perform actions; the controller can be connected to the iPad via Bluetooth.
Although it brought to mind a sci-fi movie, it looked quite promising, apart from the question of keeping it sterile. Given that surgical equipment does come in the form of headsets, it would seem quite natural to surgeons.
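Conceptually, such a headband reports two things: a noisy attention level and blink events. A hypothetical controller (the signal names, threshold, and smoothing factor are my assumptions, not the real device’s API) might smooth the attention signal and fire an action only when a blink occurs while focus is high:

```python
class BlinkTrigger:
    """Fire an action on a blink, but only while smoothed attention is high."""

    def __init__(self, threshold=0.7, alpha=0.3):
        self.threshold = threshold   # attention level required to 'arm' a control
        self.alpha = alpha           # smoothing factor for the noisy signal
        self.level = 0.0             # exponentially smoothed attention

    def update(self, attention, blink):
        """One tick: attention in 0.0-1.0, blink is True if detected this tick.

        Returns True when the action should be performed.
        """
        # exponential moving average damps spikes in the raw attention reading
        self.level = self.alpha * attention + (1 - self.alpha) * self.level
        return blink and self.level >= self.threshold
```

The smoothing matters because a single noisy reading should not arm or disarm the control mid-surgery.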
Voice interface
We considered a voice interface for two scenarios: one where the app would read aloud the steps of the operation displayed on the iPad screen during the surgery, and one where the surgeon’s speech would be recognized and actions on the iPad would be performed based on the meaning of the phrase.
We were looking at a customizable wake-up word for the interface, speech recognition, and possibly natural language processing for open commands.
We also had to bear in mind that a suitable technological approach was needed for poor internet connection scenarios. We had to explore how the variety of accents would influence the recognition of speech commands, and keep in mind the noise in the operating room caused by face masks and ongoing medical conversations.
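Stripped of the speech-recognition machinery, the runtime logic reduces to two stages: gate on the wake-up word, then match the transcribed utterance against a small command grammar. A toy sketch assuming the transcripts already arrive as text (the wake word and command phrases here are invented for illustration, not the product’s actual grammar):

```python
COMMANDS = {              # illustrative phrases, not the real product's grammar
    "next step": "NEXT",
    "previous step": "PREV",
    "repeat": "REPEAT",
}

def handle_transcript(text, wake_word="hey nodus"):
    """Return an action code if the transcript is a wake-word command, else None."""
    text = text.lower().strip()
    if not text.startswith(wake_word):
        return None                      # ignore background conversation
    command = text[len(wake_word):].strip(" ,")
    return COMMANDS.get(command)         # None for unrecognized commands
```

The wake-word gate is what keeps ordinary operating-room conversation from triggering actions; open natural-language commands would sit on top of this as a further matching stage.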
And the winner is...
In the end, though definitely a challenge from a technological standpoint, the voice interface was given the go-ahead as the option most natural to the users and the one leaving the most potential for further development of the product.
It was certainly not the end of the design challenge.
The iPad itself needed a sterile sleeve to be allowed in the operating room, but luckily it turned out that the sleeve wouldn’t obstruct the use of the app. We had the precious Human Interface Guidelines to lean on, but even the recommended font sizes and styles had to be reconsidered because of the unusual distance between the iPad and the surgeon using the app, with a patient in between.
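The arithmetic behind this is visual angle: a character must subtend a minimum angle at the eye to remain legible, so the farther away the iPad sits, the larger the type must be. A back-of-the-envelope sketch (the 0.3° minimum and the distances are common rules of thumb, not measurements from the project):

```python
import math

def min_legible_height_mm(distance_mm, min_angle_deg=0.3):
    """Smallest character height (mm) that subtends `min_angle_deg` at the eye.

    Geometry: a character of height h at distance d subtends an angle a
    where h = 2 * d * tan(a / 2).
    """
    return 2 * distance_mm * math.tan(math.radians(min_angle_deg) / 2)
```

At a typical handheld distance of 50 cm this gives roughly 2.6 mm of character height; across a patient at 150 cm it triples to about 7.9 mm, which is why the guideline font sizes had to be scaled up.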
We also needed to decide whether to go with a light or a dark mode for an operating room where the lighting conditions could change.
We kept working through these challenges and tackling them one by one. We had a strong foundation of well-conducted user research and a technical background taking into consideration the specifics of the context and the users.
The project wasn’t easy - it was filled with the awareness of the gravity of each decision, and there were no straightforward answers. But we all worked hard for it to be well-structured, precisely described at all stages, and thought through. That is what kept it so exciting and exceptionally fulfilling. You can see the results here.