
Looking At the Medical World #ThroughGlass

With all that we presently know about AV, IT and the Internet of Things (IoT), I truly believe we are expanding into a full technology realm of elaborate innovation and boundless potential. As I’ve previously written on IoT and innovation, the components of technology change and growth are appearing before our eyes almost daily. A world unto its own exists in a certain realm of technology: while many consider it to still be in a state of germination, and maybe even flux (referring, for example, to Internet security), others see it more in terms of myriad cutting-edge possibilities. In fact, I am now a member of that technology world, as I recently purchased a Samsung Gear 2 Neo watch, which syncs with numerous Galaxy devices (like my Galaxy S4 smartphone) and runs on Tizen, an open-source, Samsung-created operating system, rather than Android. The sync allows me to read messages, emails, Tweets and more on the watch, and if I don’t want to reach for my smartphone, my watch becomes my phone. I’ve actually had fairly lengthy conversations on it, and it all sounds like I’m actually talking on the phone. Dick Tracy lives again?

Going beyond the scope of “smart things,” though, is one of the most popular and disruptive technologies in the wearables market, if not the most: Google Glass, a type of wearable technology with an optical head-mounted display (OHMD) developed by Google with the mission of producing a mass-market ubiquitous computer (full Wikipedia description here).

The article How Does Google Glass Work? explains the form, function and context of Glass:

Powered by voice control — so no keyboards — Google Glass overlays the world you see around you with related information beamed onto your retina by a prism that receives it from a tiny projector inside the lens. You see both the physical world and all relevant data associated with it… With Google Glasses, the technology disappears from in front of you and you get data and applications in the context of what you’re doing or what you’re looking at. Want to know the weather right now? You won’t have to find the weather app and click on it to get a report. Weather apps for Google Glass will know when you’re looking up at the clouds and provide you with an instant weather report.
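To make the push-not-pull idea in that excerpt a bit more concrete, here is a minimal, purely hypothetical Python sketch of a context-triggered card: a tiny rule engine watches context signals such as head orientation and location and pushes a card only when they match. None of these names belong to any real Glass API; they are invented for illustration.

```python
# Hypothetical sketch: push a card when context matches, instead of the user opening an app.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Context:
    head_pitch_deg: float   # positive = wearer is looking up
    location: str

@dataclass
class Rule:
    name: str
    matches: Callable[[Context], bool]
    build_card: Callable[[Context], str]

def push_card(text: str) -> None:
    # Stand-in for delivering a card to the display.
    print(f"[GLASS CARD] {text}")

RULES: List[Rule] = [
    Rule(
        name="weather-when-looking-up",
        matches=lambda ctx: ctx.head_pitch_deg > 30,  # wearer looks at the sky
        build_card=lambda ctx: f"Weather near {ctx.location}: 72F, clear (placeholder value)",
    ),
]

def on_context_update(ctx: Context) -> None:
    # Evaluate every rule against the latest context and push matching cards.
    for rule in RULES:
        if rule.matches(ctx):
            push_card(rule.build_card(ctx))

on_context_update(Context(head_pitch_deg=40.0, location="Cincinnati"))
```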

[Image: Google Glass and the wearer’s eye]

Google Glass’ popularity continues to grow as numerous applications in various markets, such as education, are continuously being examined and, in certain instances, implemented. In the field of healthcare, a main goal is Glass usage in the operating room, and that is the focus of this blog. My interview participant is John Scott, a Glass developer and the CEO of Context Aware Computing Corporation, a progressive medical industry company that, with its product ContextSurgery, is exploring and defining targeted usage of Glass in surgical procedures. Before getting to the interview, however, it’s important to describe context awareness as it applies to healthcare (from Wikipedia):

Context-aware mobile agents are well-suited hosts for any context-aware application. Modern integrated voice and data communications equip hospital staff with smartphones to communicate vocally with each other, but preferably also to look up the next task to be executed and to capture the next report to be noted.

However, all attempts to support staff with such approaches are hampered, to the point of failed acceptance, by the need to look up patient identities, order lists and work schedules whenever a new event occurs. Hence a well-suited solution has to eliminate such manual interaction with a tiny screen and instead serve the user with:

  • automated identification of the actual patient and the local environment upon approach,
  • automated recording of events upon arriving at and leaving the patient,
  • automated presentation of the orders or services due at the current location, and
  • supported documentation of the required information, keying a minimum of data into prepared form entries.
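As a rough sketch of what those four behaviors might look like in software, here is a small, hypothetical Python example in which a proximity event identifies the patient, records the arrival, presents the orders due and drafts a documentation stub. The classes, beacon IDs and patient data are all invented for illustration.

```python
# Hypothetical event-driven sketch of the four context-aware behaviors listed above.
from dataclasses import dataclass, field
from datetime import datetime
from typing import Dict, List

@dataclass
class Patient:
    patient_id: str
    name: str
    pending_orders: List[str]

@dataclass
class EncounterLog:
    events: List[str] = field(default_factory=list)

    def record(self, kind: str, patient: Patient, location: str) -> None:
        stamp = datetime.now().isoformat(timespec="seconds")
        self.events.append(f"{stamp} {kind} patient={patient.patient_id} room={location}")

# Invented mapping of proximity-beacon IDs to patients.
BEACONS: Dict[str, Patient] = {
    "beacon-204": Patient("P-1021", "J. Doe", ["Check vitals", "Administer antibiotic dose"]),
}

def on_proximity(beacon_id: str, location: str, log: EncounterLog) -> None:
    patient = BEACONS.get(beacon_id)
    if patient is None:
        return
    log.record("ARRIVAL", patient, location)            # automated event recording
    print(f"Now with {patient.name} in {location}")     # automated patient identification
    for order in patient.pending_orders:                # automated order presentation
        print(f"  due: {order}")
    # Pre-filled documentation stub: the clinician only confirms or adds a short note.
    print(f"  draft note: '{patient.name} seen in {location}; orders reviewed.'")

log = EncounterLog()
on_proximity("beacon-204", "Room 204", log)
```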

[Image: ContextSurgery logo]

To give us further information on his company, his Glass experiences, usage in the medical industry and more, here is John Scott, CEO of the company that develops ContextSurgery.

CM: Thank you for participating in this highly informative and enlightening interview, John. Please tell us something about yourself, your initial experiences with technology and Glass, and your company that develops ContextSurgery.

JS: Right from the start I was a young hardware hacker, a maker, a doer. My insatiable curiosity and sense of adventure, combined with an explorer’s heart, drove me to innovate with just about anything I could get my hands on, from spare electronics to Erector sets. At age 14 I discovered computing machines and never looked back, writing my first line of code on a home-built microprocessor system and then developing five generations of it before reaching out to the guys at Apple to see if I could get their operating system to run on my home-built computer. The engineers at Apple did ask their management, but they reported back that the company had decided not to offer the software separate from the hardware, the sum of which would be called Macintosh.

By age 16, I had been selected for an apprenticeship as an electronic technician and was handed my first minicomputer to repair. Before turning me loose to troubleshoot the problem, my mentor gave me a few clues and then left for the day, telling me not to stay too late. Well, that was all I needed. Armed with an electronics repair shop filled with spare parts and some of the most advanced electronics test equipment I had ever seen, I went to work. The digital logic analyzer helped, but in the end it was a simple digital multimeter and an oscilloscope that allowed me to identify the problem. The culprit was a burned-out diode, which I quickly replaced. Standing back and saying a quick prayer, I powered on the very expensive computing machine.

To my delight, all the lights on the front panel twinkled and the status showed green as the minicomputer hummed to life. After repairing the input/output card, I completely ignored my supervisor’s advice about not staying too late and decided to stay all night at the electronics shop bench to learn how to program this newly repaired computing machine, with nothing but a 3-inch-thick programmer’s reference manual and the front panel input switches to load my program. By morning I had loaded the machine language code into the computer and wheeled it into the supervisor’s office on a cart, where I unplugged the display on the supervisor’s desk and plugged it into the repaired minicomputer. Then I collapsed on the boss’s couch and fell asleep. Later that morning I was woken up by an incredulous manager asking what I and the minicomputer were doing in his office, and why the display on his desk was disconnected and plugged into the minicomputer on the cart. When he finally hit the enter key on his keyboard, as the yellow Post-it note instructed, the supervisor jumped back and gasped. To his amazement, displayed in front of him was one simple sentence: ‘Good morning, Hal.’

In 2010, I was consulting at the North American research and development headquarters of a Global 500 technology company when I had a moment of clarity during a discussion with a colleague about situationally aware computing and a nascent field of study called context-aware computing. That year, as a way of learning more about the subject, I began work on a research paper. While the work was never published, it set me on a course of discovery that collided with the introduction of wearable technologies. These wearable devices are stuffed with sensors and location-enabled technology, with motion and proximity capabilities that could be used to form context for the person wearing them. ContextSurgery was born from the realization that combining these devices with a context-aware computing platform could provide surgeons with the right information in the right place at the right time.

My experience with Google Glass actually began before I acquired my first test device. Months beforehand, based on the specifications of the device, I began designing context-aware computing solutions around its capabilities. In fact, I designed several platforms based on the eventual emergence of wearable devices that would one day be available. I knew that if adequate sensors and proximity detection capabilities were to be introduced, whole new categories of use cases would become possible.

CM: Your company promotes context aware augmented reality for surgeons. Can you elaborate on this?

JS: ContextSurgery offers a specific selection of available visualizations and reality augmentations that best suit the medical professional’s needs. From critical information such as patient vital signs, pertinent drug dosages, test results and intraoperative imaging to patient record and case history data from the EHR/EMR, all of this vital information is pushed to Google Glass at the right time and in the right place, as unintrusively as possible. We provide advanced context-aware technology for the forward-leaning surgeon.

Augmented Reality (AR) and Virtual Reality (VR) allow us to combine elements of the physical world with virtual elements representing imagery and information to guide and inform us. This enriches our ability to perform tasks that transcend our current capabilities and push the boundaries of science and technology. In the medical field, various AR systems have recently been proposed: systems for education, pre-planning, and those in the operating room. ContextSurgery is leveraging augmented reality and mixed reality to allow surgeons to perform procedures never before possible. At the same time these technologies will enable surgeons and physicians to reduce medical errors, drive down costs by working more efficiently and ultimately increase the quality of healthcare. 

We believe that the many exciting research activities being conducted today, which augment intraoperative images with virtual models of instruments and key targets to guide surgical procedures, represent the future, and that within three to five years they will enable surgeons to use smart glass wearable technology like Google Glass to view augmented reality images as they perform life-saving surgeries.
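As a toy illustration of the compositing step behind such augmentation, here is a short Python sketch that draws a virtual target marker onto an intraoperative image frame at a position already registered to image coordinates. Real systems handle tracking, registration and calibration; this only shows the overlay itself, and the function and values are invented for illustration.

```python
# Toy sketch: composite a virtual target (a crosshair) onto an image frame.
import numpy as np

def overlay_target(frame: np.ndarray, x: int, y: int, size: int = 10) -> np.ndarray:
    """Return a copy of `frame` (H x W x 3, uint8) with a green crosshair at (x, y)."""
    out = frame.copy()
    h, w = out.shape[:2]
    x0, x1 = max(0, x - size), min(w, x + size + 1)
    y0, y1 = max(0, y - size), min(h, y + size + 1)
    out[y, x0:x1] = (0, 255, 0)   # horizontal arm of the crosshair
    out[y0:y1, x] = (0, 255, 0)   # vertical arm of the crosshair
    return out

# Example: a blank 480x640 stand-in "frame" with a target registered at (320, 240).
frame = np.zeros((480, 640, 3), dtype=np.uint8)
augmented = overlay_target(frame, x=320, y=240)
```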

(Note: Here is a video showing an example of intraoperative imaging – Zurich University Hospital)

CM: Can you tell us about your team and their contributions to the company and the medical industry?

JS: Our team consists of information technology and medical industry veterans; each of our key team members, including myself, has an extensive background with decades of experience.

As company CEO, I have 35 years of software development, middleware and management experience. 

The company’s medical expert and Chief Medical Officer, David Martineau, is a fellowship-trained orthopedic hand surgeon and an early adopter of wearables. David collaborates with Context Aware Computing Corp.’s development team and helps define and drive its beta test program. This program embeds surgeons in ContextSurgery’s development lifecycle, defining use cases and acceptance criteria for software testing. Beyond the company’s initial product launch, it is anticipated that David will play an important role in the development of Context Aware Computing Corp.’s medical product line through his vision and commitment to building advanced context-aware augmented reality technology for forward-leaning surgeons and physicians. David Martineau and I met through our mutual interest in, and early adoption of, wearable technologies and smart glasses. We both share a common belief and passion that the future of these technologies hinges on push-oriented context-aware computing platforms that can provide the right information in the right place at the right time.

(Note: According to John, the following is an introduction David wrote for the Context Aware team when the company was started, one that was highly important in its development.)

“My name is David Martineau. I am a board-certified, fellowship-trained orthopedic hand surgeon. What that means is I’ve spent a LONG time training for my specialty (10 consecutive years following the completion of my bachelor’s in Biology)! Training took me through the University of Cincinnati College of Medicine, where I earned my medical doctorate, followed by five years of “hard time” in Flint, Michigan, where I completed my orthopedic surgical residency, and lastly a one-year hand and microsurgery fellowship in Louisville, Kentucky. During these years I worked on several research projects and initiatives, from cost analysis to microbiology to biomechanics. Interests included the biomechanics of implants and prostheses, including a DoD-funded study looking at bone healing in open tibia fractures…

I spent my first two years practicing hand surgery and general orthopedics in Phoenix, Arizona, where I quickly became involved in this large orthopedic group’s electronic medical record (we also used CERNER while I was there). I headed up the EMR division, working with programmers, HR, quality and operations personnel. Our focus was on data collection, quality metrics, improvement in the delivery of healthcare AND tracking the improvements. We worked on automating these processes, which tracked every aspect of the patient encounter from the time a patient called to schedule an appointment until their final follow-up survey was completed upon discharge from care.

Ultimately, the distance from our extended families was too great for my wife and daughter, so we moved back to the Cincinnati-Dayton, Ohio, area, where I am currently working at Orthopedic Associates of SW Ohio. It’s been a great opportunity thus far, as we run an orthopedic residency training program along with our own hand fellowship. With encouragement from my partners, I have been able to devote time to advancing technology in healthcare. This has led to my involvement with Google Glass as an Explorer (two of my partners followed suit). I helped develop the world’s first pair of prescription lenses with surgical telescopes (see below) designed specifically for Google Glass. This has enabled me to use Google Glass for hand surgery, which requires “loupe” magnification to work on structures often a millimeter in size. I am also on the board of the international society for wearable technology in healthcare.

To say I am excited to be a part of this team is a gross understatement! I am looking forward to working with everybody [as a member of the Context Aware team].”

[Image: David Martineau with Google Glass, Rx lenses and Designs for Vision loupes]

The company’s CTO, Eric Redmond, is one of the world’s leading experts in wearable technology, with a background in healthcare software. Eric is a programmer, international speaker and author of the first book on programming Google Glass. Eric Redmond and John Scott were introduced by the CEO of a company Context Aware partners with. Eric had worked with the CEO of this development partner and came highly recommended because of his pioneering adoption of smart glass and wearable technologies, combined with his big data experience, technical leadership and exceptional customer-facing skills. Eric also has prior startup experience, having founded a development tools business.

CM: Your website refers to the Google Glass “Surgical Dashboard,” which can be customized to suit a surgeon’s needs and through which patient data and surgical procedures can be pushed to Google Glass, hands-free. Can you explain this application and how it benefits the surgical process?

JS: Augmented Reality is a promising paradigm for intraoperative assistance. A major obstacle to its clinical application is the man-machine interaction. Visualization of unnecessary, obsolete or redundant information may cause confusion and distraction, reducing usefulness and acceptance. Context-aware software automatically filters information.

We provide a context-aware computing platform that proactively provides a surgeon with the right information, in the right place, at the right time, enabling them to increase the quality of patient care, reduce medical errors and increase their efficiency. From critical information such as patient vital signs, pertinent drug dosages, test results and intraoperative imaging to patient record and case history data from the patient record systems, all of this vital information is pushed to Google Glass, allowing the surgeon to stay focused on the procedure he or she is performing while reducing medical errors. Further, the surgeon is guided through surgical checklists, and upon completion a record that the checklist was successfully completed is stored in the electronic health record system in case it is needed to defend against malpractice lawsuits.
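Here is a minimal, hypothetical Python sketch of those two behaviors: filtering dashboard cards by surgical context so only relevant information is pushed, and recording a completed checklist to an EHR stub for the audit trail. The names, phases and data are invented for illustration and are not ContextSurgery’s actual implementation.

```python
# Hypothetical sketch: context-filtered dashboard cards plus a checklist record written to an EHR stub.
from dataclasses import dataclass
from datetime import datetime
from typing import Dict, List

@dataclass
class SurgicalContext:
    procedure: str
    phase: str            # e.g. "time-out", "incision", "closing"

# Which cards matter in which phase; anything else is filtered out as noise.
CARDS_BY_PHASE: Dict[str, List[str]] = {
    "time-out": ["Patient ID + allergies", "Surgical safety checklist"],
    "incision": ["Vital signs", "Intraoperative imaging"],
    "closing":  ["Sponge/instrument count", "Post-op orders"],
}

EHR_AUDIT_LOG: List[dict] = []   # stand-in for the electronic health record system

def push_relevant_cards(ctx: SurgicalContext) -> None:
    # Push only the cards that match the current surgical phase.
    for card in CARDS_BY_PHASE.get(ctx.phase, []):
        print(f"[GLASS] {ctx.procedure} / {ctx.phase}: {card}")

def complete_checklist(ctx: SurgicalContext, checklist: str, items: List[str]) -> None:
    # Record that the checklist was completed, for the audit trail described above.
    EHR_AUDIT_LOG.append({
        "procedure": ctx.procedure,
        "checklist": checklist,
        "items_confirmed": items,
        "completed_at": datetime.now().isoformat(timespec="seconds"),
    })

ctx = SurgicalContext(procedure="Carpal tunnel release", phase="time-out")
push_relevant_cards(ctx)
complete_checklist(ctx, "Surgical safety checklist", ["Patient identity confirmed", "Site marked"])
```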

[Image: The Google Glass surgical dashboard]

(Note: Watch the video at the end of this blog to see a simulated procedure performed with the surgical dashboard by Dr. Martineau).

CM: Can you tell us how you believe the medical field will benefit from the usage of Google Glass over the next five years?

JS: In the United States there are over 6,000 hospitals and surgical centers, over 700,000 physicians and over 140,000 surgeons. Hospitals, surgeons and doctors are continually seeking ways of increasing the quality of patient care, becoming more efficient, reducing medical errors and aiding in the reduction of malpractice lawsuits. Through innovation programs and individual collaboration with surgeons and physicians, we can redefine the way surgeons do their jobs.

John, thank you for participating in this important blog on the advancement of high-level technology in the medical industry. I would just like to add some further information here at the end for all of the readers:

WATCH Society: The Future of Wearable Tech in Healthcare

The WeArable TeChnology in Healthcare (WATCH) Society was founded by a group of early adopters of wearable technology in healthcare. Their mission is to bring healthcare professionals, developers, and hardware manufacturers of wearable technology together to form a melting pot of innovation and collaboration with the goal of leading the world into the future of healthcare delivery.

WATCH Conference 2014: Context Aware Surgery Demo

(Note: Context-aware surgery with Dr. David Martineau, who demonstrates the ContextSurgery platform for Google Glass by Context Aware Computing Corp., Context-Aware Augmented Reality for Surgeons. It was recorded at the WATCH Conference.)

WATCH Conference 2014: Presentation by John Scott

(Note: John Scott, founder and CEO, presents ContextSurgery by Context Aware Computing Corporation: the right information in the right place at the right time. The video has been edited down to 13 minutes if you would like to view it.)

Here is a short video provided by Dr. Martineau on Google+, with this description: “Unspecified roller coaster at an unspecified amusement park… #throughglass It was the first ride of the day and the ONLY one I was allowed to wear Glass on.”

On Google+: John Scott, David Martineau, Eric Redmond, ContextSurgery, Context Aware

On Twitter: John Scott, David Martineau, ContextSurgery

On LinkedIn: Context Aware Computing Corp.
