Wednesday, December 29, 2010

A Secure Transport Strawman

 by John Halamka, Life as a Healthcare CIO

 Over the past few years, I've posted many blogs about the importance of transport standards.   Once a transport standard is widely adopted, content will seamlessly flow per Metcalfe's law.   We already have good content standards from X12, HL7, ASTM, and NCPDP.  We already have good vocabulary standards from NLM, Regenstrief, IHTSDO and others.   We have the beginnings of transport standards from CAQH, IHE, and W3C.   We have the work of the NHIN Direct Project (now called the Direct Project).

After working with Dixie Baker/the HIT Standards Committee's Privacy and Security Workgroup on the Direct evaluation and after many email conversations with Arien Malec, I can now offer a strawman plan for transport standards.

Based on the implementation guides currently available, the HIT Standards Committee evaluation found the SMTP/SMIME exchange defined by the Direct Project sufficiently simple, direct, scalable, and secure, but stressed the need to develop implementation guidance that is clear and unambiguous.   I've received many emails and blog comments about SMTP/SMIME versus other approaches.  I believe I can harmonize everything I've heard into a single path forward.


As with all HIE efforts, policy has to constrain technology. The policy guidance that the Direct Project was given was as follows:

A "little guy" such as a 2 doctor practice in rural America wants to send content to another 2 doctor practice across town.   These small practices should not have to operate servers or have to pay for a complex health information exchange infrastructure.   Healthcare Information Services Providers (HISPs) should provide them the means to exchange data as easily as Google provides Gmail or Verizon FIOS provides ISP service.   All HISP to HISP communications should be encrypted such that the sending practice and receiving practice can exchange data without any HISP in the middle being able to view the contents of the data exchanged.

In my opinion, for this type of exchange

Small Practice 1 ---> HISP 1 ----> HISP 2 ----> Small Practice 2

SMTP/SMIME at an organizational level is the right transport solution.   By organizational level, I mean that one certificate is used for the sending organization and one for the receiving organization.   There is no need to issue certificates to individual people involved in the exchange.

SMTP/SMIME at an organizational level encrypts, integrity protects, and digitally signs the payload at the point where the message is created.  The payload can be sent through multiple intermediaries to the receiver with assurance that the message will be readable only by the intended receiver.
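
To make this concrete, here is a minimal sketch of what organizational-level S/MIME signing looks like in code, using Python's cryptography package. It is illustrative only, not the Direct Project reference implementation; the file names and payload are assumptions.

# Sign a payload with Small Practice 1's *organizational* certificate and key.
# File names and message content are illustrative assumptions.
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.serialization import pkcs7

with open("practice1_org_cert.pem", "rb") as f:
    org_cert = x509.load_pem_x509_certificate(f.read())
with open("practice1_org_key.pem", "rb") as f:
    org_key = serialization.load_pem_private_key(f.read(), password=None)

payload = b"Content-Type: text/plain\r\n\r\nReferral summary for the patient..."

# Sign at the point where the message is created; the signed message can then
# traverse any number of HISPs without being altered undetectably.
signed_message = (
    pkcs7.PKCS7SignatureBuilder()
    .set_data(payload)
    .add_signer(org_cert, org_key, hashes.SHA256())
    .sign(serialization.Encoding.SMIME, [pkcs7.PKCS7Options.DetachedSignature])
)

Encrypting the signed message to the receiving organization's certificate, so that only Small Practice 2 can read it, would be layered on top of this (for example with the same library's PKCS#7 envelope support in recent versions, or with openssl's smime tooling).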

Given the policy guidance to support the little guy (any practice in the country that wants to send any content securely to any other practice without risk of viewing by any intermediary), SMTP/SMIME is sufficient and appropriate.

For other types of exchanges with different policy constraints, TLS is more flexible and functional.   In Massachusetts, NEHEN is a federated HIE, enabled by placing open source software within the networks of each participating institution.    Server to Server communication is a SOAP exchange over TLS.   In this case, the HISP resides within the firewall of each participating payer or provider organization.   TLS enables simple, secure transmission from one organization to another.   TLS does not require a certificate store.  TLS enables REST, SOAP, or SMTP transactions to flow securely because the connection between organizations is encrypted.
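
For concreteness, here is a rough sketch (not NEHEN's actual software) of one participating organization's server posting a SOAP message to a trading partner over TLS, using Python's requests library. The endpoint URL and file names are illustrative assumptions; whether the partner also requires a client certificate is a deployment choice.

import requests

soap_envelope = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
  <soap:Body><!-- payload goes here --></soap:Body>
</soap:Envelope>"""

response = requests.post(
    "https://exchange.partner-hospital.example/soap",  # partner's endpoint (illustrative)
    data=soap_envelope,
    headers={"Content-Type": "application/soap+xml"},
    cert=("our_org_cert.pem", "our_org_key.pem"),       # presented if the partner requires mutual TLS
    verify="partner_root_ca.pem",                       # trust anchor for the partner's server certificate
)
response.raise_for_status()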

Where TLS falls down is in the Direct use case with its policy requirements that no intermediaries between the sender and receiver may have access to unencrypted data.  This excludes the case in which the sender uses a HISP as a business associate to package the data as an SMIME message.  A sender has no way of knowing what intermediaries the information may flow through, so implementing secured message flows from any sender to any receiver using TLS is untenable.

Thus, our path forward is clear.  If we impose a policy constraint that small organizations which use external HISPs should be able to send data securely to other small organizations which use external HISPs such that the HISPs cannot view the data, then SMTP/SMIME with some mechanism to discover the certificates of each organization is the right answer at this time.
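
One candidate discovery mechanism is publishing organizational certificates in DNS CERT resource records, an approach Arien Malec touches on below. A rough sketch, assuming the dnspython package and an illustrative domain name:

# Look up CERT records published for the receiving organization's Direct domain.
import dns.resolver

def discover_org_certs(domain):
    try:
        answers = dns.resolver.resolve(domain, "CERT")
    except dns.resolver.NoAnswer:
        return []
    # Each answer carries a certificate published by the organization.
    return list(answers)

for record in discover_org_certs("direct.smallpractice2.example"):
    print(record)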

If the use case is simpler - secure exchange between HISPs that reside within the trading partner organizations, or a relaxation of the policy constraint that HISPs cannot view data - then TLS is the right answer at this time.

The next steps are also clear.   Establish SMTP/SMIME as a standard, and secure email using SMTP/SMIME as a certification criterion for EHR technology.  Establish standards for X.509 certificates for organization-to-organization exchanges, as suggested by the Privacy and Security Tiger Team.

There you have it - the solution to the transport standards issue for the country - SMTP/SMIME for little guys using external HISPs and TLS for other use cases.

Done! Now it's time to implement and foster adoption.

Tuesday, December 28, 2010

Prepping for the ACO announcement

What to do while we're waiting for CMS' proposed rule on accountable care organizations? Why not read what everyone else is saying about ACOs?

Accountable Care Organizations — The Fork in the Road by Thomas L. Greaney, J.D.

Accountable Care Organizations - the week's top links by Rich Elmore

Accountable Care Organizations - top links by Rich Elmore

A Model For Integrating Independent Physicians Into Accountable Care Organizations by Mark Shields, Pankaj Patel, Martin Manning and Lee Sacks

Berwick: Some claim ACO status without truly changing by Sandra Yin

Accountable Care Organizations: Health IT's Critical Role by Bruce Fried
CMS ramps up for ACOs, seeks physician input by Diana Manos

Designing a health IT backbone for ACO's by PWC Health Research Institute

Accountable Care Organizations Hold Promise But Will They Achieve Cost and Quality Targets? by Peter Boland, Phil Polakoff and Ted Schwab

6 points every physician should consider about Medicare ACOs by Asa Lockhart

Accountable Care Organizations in California - Lessons for the National Debate on Delivery System Reform by James Robinson and Emma Dolan

CMS Faces Divisions Within the Agency on its Rules for Accountable Care Organizations by Judy Packer-Tursman


ACOs: Is this the next big thing or not? by Warren Skea and Brett Hickman


Patients’ Role in Accountable Care Organizations by Anna Sinaiko and Meredith Rosenthal


Physicians Need to Lead Accountable Care Organizations to Protect Profession by Robert Lowes

An Accountable Care Organization Reading List by Chris Fleming

Health Policy Brief - Accountable Care Organizations by Health Affairs and Robert Wood Johnson Foundation

Which Payment System is Best

10-5-10 ACO Workshop PM Session Transcript

Friday, December 24, 2010

Rock Stars of Health Information Technology 2010

 Starring John Halamka, David Blumenthal, Glen Tullman, Bill Spooner and Wes Rishel


Thursday, December 23, 2010

S/MIME and TLS - The #DirectProject

 by Arien Malec
John Halamka today posted on a topic that has been a huge item of historical discussion in the Direct Project: whether we can enable simple, direct, scalable and secure transport in support of meaningful use based just on TLS.
Our conclusion was that you couldn't, at least not simply, and we felt that S/MIME was the natural proven fallback.
To explain why, I'm going to go moderately deep on bits and bytes in a rather long post. I apologize in advance. If you want to get to the meat, go to the paragraph that starts "so why is this a problem?"
First, I want to distinguish server authenticated TLS from client (or mutually authenticated) TLS. Server authenticated TLS is the form you are likely most familiar with. You connect to a website using SSL (https), and your browser's lock turns green (in some sites that use Extended Validation (EV) certificates, your browser's lock turns green with the name of the organization you are connecting to). Your browser is checking:
  • That you have a strong SSL encrypted connection
  • That the certificate that the server is using for the connection was actually assigned to the server
  • That the certificate was issued by a CA that your browser trusts
This gives you, the shopper, assurance that the channel is encrypted and that the server is not spoofing the organization it claims to be (for example, it ensures that when you shop at Amazon, you aren't actually shopping at BadHacker.com, who is siphoning off your credit card and CCV number and not shipping you any books).
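
In code, those checks amount to something like this sketch using Python's ssl module (the host name is just the Amazon example above):

import socket, ssl

context = ssl.create_default_context()   # loads the platform's trusted root CAs
context.check_hostname = True            # the certificate must match the host name
context.verify_mode = ssl.CERT_REQUIRED  # the certificate must chain to a trusted root

with socket.create_connection(("www.amazon.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="www.amazon.com") as tls:
        print(tls.version(), tls.getpeercert()["subject"])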
Your browser has a list of root certificates (for example, this is the list of roots approved by the Mozilla Foundation for Firefox) that it trusts. As I mentioned above, there are two kinds of trusted SSL certificates in common use: the ordinary kind, that verify that the certificate holder actually controls the domain in question, and the EV kind, that verify that the certificate holder is the actual legal entity it purports to be.
So far, so good. This approach works well, and promotes a good ecosystem of certificate issuers who compete on cost without lowering overall quality.
The only problem is that this approach authenticates the server, but not the client. In the context of information exchange, it would give assurance that the receiver of information was the individual or organization it purported to be, but would give no assurance whatsoever about the sender. Oops.
If you want to authenticate the client and the server in a TLS-encrypted channel, you can use mutual authentication. In this mode, the server presents its certificate and encrypts the channel, as before, and then requests that the client present its certificate. If the client trusts the server and the server trusts the client, the transaction can proceed. Brilliant!
Here was the problem we ran into: in the TLS negotiation process, the server presents one and only one certificate path. In typical implementations*, the server also verifies the client certificate against one and only one CA root. That means that for every client-server pair, the server and the client have to share the same root CA.
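
A minimal sketch of the server side of mutually authenticated TLS, using Python's ssl module (file names are illustrative), shows the constraint: the server validates client certificates against the one CA bundle it is configured with.

import ssl

server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
server_ctx.load_cert_chain("exchange_server_cert.pem", "exchange_server_key.pem")

# Require a client certificate, and trust only clients whose certificates chain
# to this CA bundle; a client rooted elsewhere cannot complete the handshake.
server_ctx.verify_mode = ssl.CERT_REQUIRED
server_ctx.load_verify_locations("shared_root_ca.pem")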
So why is this a problem? Well, if everyone lives in the same exchange, it's not a problem. The exchange hands client certificates to everyone, and everything works. If people use different exchanges, it falls apart. Consider a typical practicing physician:
  • She receives her exchange connectivity through StateHUB, her state HIE provider
  • The cardiologist she typically refers to has an EHR provided by GreatEHRCo, which also runs exchange connectivity
  • She refers to two hospitals, GeneralIDN and MemorialIDN, one of which uses VeronaSoft as its EHR and exchange capability, the other of which uses KCTech as its EHR and exchange capability
  • She has some patients who are veterans, and she needs to exchange data with the VA
  • She uses two national reference labs, LaboratoryCo and Journey, each of which runs a lab-specific exchange
This is not a crazy situation; in fact, the real world will likely have more exchanges in the mix (many EHR vendors are building exchange capabilities into their products). Based on the way TLS currently works, for this model of real-world exchange to work, every exchange would have to use the same CA. If StateHUB got its certificate from Verizon, GreatEHRCo got its certificate from Verisign, and GeneralIDN got its certificate from Thawte, they couldn't connect based on the mutually authenticated TLS of today.
If you take anything away from this long post, take this away: using TLS with mutual authentication would require central coordination and configuration. That is, unlike the browser bundle I mentioned above, which provides a market-based approach where anyone can stand up a CA and, assuming they pass validation by WebTrust or a similar credentialing organization, play on an equal footing with the established CAs, the single-rooted world would allow only one master CA. Assuming ONC provided that single master CA root or bundle, getting the operational and governance mechanisms to make that work, rolling it out nationwide, and enabling a robust market for identity assurance would take us a ways out in the future.
In addition, authentication in this approach is explicitly machine to machine, not organization to organization. To get organization to organization authentication, the SNI extension would be needed because TLS servers provide a single server certificate for the IP address on which the server runs. In cases where the same machine is used for multiple organizations (e.g., an HIE or a HISP), SNI allows virtual hosts to be used on the same IP address. Unfortunately, SNI is not well supported across web servers or SMTP servers. Without SNI, authentication is exchange to exchange, not organization to organization, which bumps the policy requirement for identity assurance to the intermediary.
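
To make the SNI point concrete, here is a minimal sketch, using Python's ssl module, of selecting a different organizational certificate per requested host name on a single IP address (host and file names are illustrative assumptions):

import ssl

def make_context(cert_file, key_file):
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(cert_file, key_file)
    return ctx

# One certificate per hosted organization, all served from the same IP address.
org_contexts = {
    "generalidn.example.org": make_context("generalidn_cert.pem", "generalidn_key.pem"),
    "memorialidn.example.org": make_context("memorialidn_cert.pem", "memorialidn_key.pem"),
}

listener_ctx = make_context("hisp_default_cert.pem", "hisp_default_key.pem")

def select_certificate(ssl_socket, server_name, default_context):
    # Called during the handshake with the host name the client asked for (SNI).
    if server_name in org_contexts:
        ssl_socket.context = org_contexts[server_name]

listener_ctx.sni_callback = select_certificate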
There are inventive ways around this problem, by using TLS extensions like SNI** and aggressive (and by aggressive, I mean aggressive) use of DNS RRs like SRV and NAPTR and CERT to allow discovery of the certificate in use by a particular exchange participant, or by using server-auth TLS and client-signed OAuth bearer tokens, but it ends up being pretty knotty and uses a technology stack that is not well supported. We tried. We really wanted TLS to work.
S/MIME, by contrast, works today, is well supported, has multiple interoperable implementations, works on the PKI infrastructure that is in common use today, is well described by the IETF, and is forward compatible with a future in which we have authentication and digital signatures at the individual provider level. That's why we went that direction in the Direct Project, and I feel just as good about that decision today as when we made it.
* The TLS spec supports multiple CAs for client authentication but only one cert chain for the server. Multiple client CAs are supported in Apache and IIS, but not in Nginx.

Affirming Flexibility…With Certified EHR Systems

 by David Blumenthal

Today on our FAQ page, we are posting a revised Question and Answer regarding an issue that has recently caused confusion in our meaningful use regulations: namely, the flexibility that providers have to defer performance on some Stage 1 meaningful use objectives; and how that squares with the requirement that providers must nonetheless possess fully-certified EHR systems.

The new FAQ is meant to clarify this two-part requirement. But we should make it equally clear that our policy has not changed:

As stated in our final regulations, providers are given the flexibility to defer as many as five “menu set” objectives during Stage 1 and still achieve meaningful use. That means providers have flexibility to stage their adoption and implementation of EHRs in sync with their plans to defer certain menu set objectives.
But as also stated in our final regulations, we require EHR systems themselves be certified against all criteria adopted by the Secretary. So even though a provider has the option of deferring some objectives during Stage 1, the EHR system in the provider’s possession must be certified against all functions. Possession means having a legal right to access and use, at the provider’s discretion, all of the Stage 1 functions of a fully-certified system – but it does not imply that the provider must fully implement every one of these functions.
To understand this two-part approach, we need to look back to the development of the meaningful use regulations. From the beginning, this process was aimed at achieving the right balance – a balance between the need to achieve effective and rapid adoption of EHRs throughout the United States; and at the same time to be realistic about the challenges facing providers on the road to meaningful use.

In our final regulations, I believe HHS achieved the needed balance:

We identified the objectives that constituted meaningful use in Stage 1. These objectives are part of a coherent, longer-range plan for EHR adoption and meaningful use. We will build on these objectives as we graduate through Stage 2 and 3 of the transition process.
But at the same time, for these initial years, we recognized the challenge that this transition will pose to providers. For that reason, we gave providers flexibility in their own “staging” choices, permitting them to defer performing on as many as five of the 10 “menu set” objectives. This guarantee of flexibility, provided in our final regulations, has not been changed.
Why did we require that EHRs be certified as capable of meeting all of the certification criteria for meaningful use, even though we allowed flexibility concerning which criteria providers actually had to meet? There were several reasons.

First, our regulation stated that in Stage 2 of Meaningful Use, we will require that providers meet all the requirements laid out in Stage 1, including all 10 of the objectives on the options menu. Having records capable of meeting all 10 objectives allows providers to get a head start on Stage 2 of meaningful use.

Second, we expect that some providers may try and fail to meet meaningful use objectives on one or more of the menu criteria. If their records are not capable of meeting the other optional objectives, they may be unable to obtain and implement the capabilities they lack in time to qualify for meaningful use. Thus, the requirement that certified EHRs possess the capability to meet all requirements actually gives providers the flexibility to experiment with multiple approaches to meeting meaningful use - and guarantees that if they fall short, they will not be left high and dry. This flexibility is only possible when the provider has access to certified technology for all Stage 1 functions.

The details of these requirements can be found in the new FAQ, and I invite you to read and comment. I hope it will be clear that these two elements are not in conflict, but rather represent the balance that has characterized the evolution of the meaningful use process. Finally, I hope it will be clear that there has been no change in the guarantee of provider flexibility during Stage 1.

To achieve EHR-based health care, we need to build a strong technology foundation. But at the same time, we need to recognize that providers have varying circumstances and different needs, and we seek to accommodate those differences as we support the transition to EHRs. In that spirit, we are delivering on the promise in our final regulations to give providers the flexibility they require to succeed in adoption and meaningful use of EHRs.

Tuesday, December 21, 2010

Why medical testing is never a simple decision

by Marya Zilberberg, Healthcare, etc.


A couple of days ago, Archives of Internal Medicine published a case report online. Now, it is rather unusual for a high impact journal to publish even a case series, let alone a case report. Yet this was done in the vein of highlighting their theme of "less is more" in medicine. This motif was announced by Rita Redberg many months ago, when she solicited papers to shed light on the potential harms that we perpetrate in healthcare with errors of commission.

The case in question is one of a middle-aged woman presenting to the emergency room with vague symptoms of chest pain. Although from reading the paper it becomes clear that the pain is highly unlikely to represent heart disease, the doctors caring for the patient elect to do a non-invasive CT angiography test, just to "reassure" the patient, as the authors put it. Well, lo and behold, the test comes back positive, the woman goes for an invasive cardiac catheterization, where, though no disease is found, she suffers a very rare but devastating tear of one of the arteries in her heart. As you can imagine, she gets very ill, requires bypass surgery and ultimately an urgent heart transplant. Yup, from healthy to a heart transplant patient in just a few weeks. Nice, huh?

The case illustrates the pitfalls of getting a seemingly innocuous test for what appears to be a humanistic reason -- patient reassurance. Yet, look at the tsunami of harm that followed this one decision. But what is done is done. The big question is, can cases like this be prevented in the future? And if so, how? I will submit to you that Bayesian approaches to testing can and should reduce such complications. Here is how.

First, what is Bayesian thinking? Bayesian thinking, formalized mathematically through Bayes theorem, refers to taking the probability of disease being there into account when interpreting subsequent test results. What does this mean? Well, let us take the much embattled example of mammography and put some numbers to the probabilities. Let us assume that an otherwise healthy woman between 40 and 50 years of age has a 1% chance of developing breast cancer (that is 1 out of every 100 such women, or 100 out of 10,000 undergoing screening). Now, let's say that a screening mammogram is able to pick up 80% of all cancers that are actually there (true positives), meaning that 20% go unnoticed by this technology. So, among the 100 women with actual breast cancer of the 10,000 women screened, 80 will be diagnosed as having cancer, while 20 will be missed. OK so far? Let's go on. Let us also assume that, in a certain fraction of the screenings, mammography will merely imagine that a cancer is present, when in fact there is no cancer. Let us say that this happens about 10% of the time. So, going back to the 10,000 women we are screening, of 9,900 who do NOT have cancer (remember that only 100 can have a true cancer), 10%, or 990 individuals, will still be diagnosed as having cancer. So, tallying up all of the positive mammograms, we are now faced with 1,070 women diagnosed with breast cancer. But of course, of these women only 80 actually have the cancer, so what's the deal? Well, we have arrived at the very important idea of the value of a positive test: this roughly tells us how sure we should be that a positive test actually means that the disease is present. It is a simple ratio of the real positives (true positives, in our case the 80 women with true cancer) and all of the positives obtained with the test (in our case 1,070). This is called positive predictive value of a test, and in our mammography example for women between ages of 40 and 50 it turns out to be 7.5%. So, what this means is that over 90% of the positive mammograms in this population will turn out to be false positives.

Now, let us look at the flip side of this equation, or the value of a negative test. Of the 8,930 negative mammograms, only 20 will be false negatives (remember that in our case mammography will only pick up 80 out of 100 true cancers). This means that the other 8,910 negative results are true negatives, making the value of a negative test, or negative predictive value, 8,910/8,930 = 99.8%, or just fantastic! So, if the test is negative, we can be pretty darn sure that there is no cancer. However, if the test is positive, while cancer is present in 80 women, 990 others will undergo unnecessary further testing. And for every subsequent test a similar calculus applies, since all tests are fallible.

Let's do one more maneuver. Let's say that now we have a population of 10,000 women who have a 10% chance of having breast cancer (as is the case with an older population). The sensitivity and specificity of mammography do not change, yet the positive and negative predictive values do. So, among these 10,000 women, 1,000 are expected to have cancer, of which 800 will be picked up on mammography. Among the 9,000 without cancer, a mammogram will "find" a cancer in 900. So, the total positive mammograms add up to 1,700, of which nearly 50% are true positives (800/1,700 = 47.1%). Interestingly, the negative predictive value does not change a whole lot (8,100/[8,100 + 200]) = 97.6%, or still quite acceptably high). So, while among younger women at a lower risk for breast cancer, a positive mammogram indicates the presence of disease in only 8% of the cases, for older women it is about 50% correct.
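
For readers who want to check the arithmetic, this small script reproduces both scenarios above using the same sensitivity (80%) and false positive rate (10%):

def predictive_values(prevalence, sensitivity, false_positive_rate, n=10_000):
    with_disease = n * prevalence
    without_disease = n - with_disease
    true_pos = with_disease * sensitivity            # cancers the test finds
    false_neg = with_disease - true_pos              # cancers the test misses
    false_pos = without_disease * false_positive_rate
    true_neg = without_disease - false_pos
    ppv = true_pos / (true_pos + false_pos)          # value of a positive test
    npv = true_neg / (true_neg + false_neg)          # value of a negative test
    return ppv, npv

for prevalence in (0.01, 0.10):
    ppv, npv = predictive_values(prevalence, sensitivity=0.80, false_positive_rate=0.10)
    print(f"prevalence {prevalence:.0%}: PPV {ppv:.1%}, NPV {npv:.1%}")
# prevalence 1%:  PPV 7.5%,  NPV 99.8%
# prevalence 10%: PPV 47.1%, NPV 97.6%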

These two examples illustrate how exquisitely sensitive an interpretation of any test result is to the pre-test probability that a patient has the disease. Applying this to the woman in the case report in the Archives, some back-of-the-napkin calculations based on the numbers in the report suggest that, while a negative CT angiogram would indeed have been reassuring, a positive one would only create confusion, as it, in fact, did.

To be sure, if we had a perfect test, or one that picked up disease 100% of the time when it was present and did not mislabel people without the disease as having it, we would not need to apply this type of Bayesian accounting. However, to the best of my knowledge, no such test exists in today's clinical practice. Therefore, engaging in explicit calculations of what results can be expected in a particular patient from a particular test before ordering such a test can save a lot of headaches, and perhaps even lives. In fact, I do hope that the developers of our new electronic medical environments are giving this serious thought, as these simple algorithms should be built into all decision support systems. Bayes theorem is an idea whose time has surely come.

Monday, December 20, 2010

The December HIT Standards Committee

by John Halamka, Life as a Healthcare CIO


The December HIT Standards Committee focused on a review of the President's Council of Advisors on Science and Technology (PCAST) report, a review of the Standards and Interoperability Framework Priorities, and a review of NHIN Direct (now called the Direct Project).  

We began the meeting with an introduction from Dr. Perlin in which he noted that reports by commissions such as PCAST need to be read, not for their details, but for their directionality.  We should ask about the trajectory the experts think we should be on and how and when it should modify our current course.   Dr. Blumenthal also offered an introduction to the PCAST discussion, noting that the White House fully supports and encourages interoperability, suggesting that we should accelerate the priority of healthcare information exchange in the progression from Meaningful Use stage 1 to 3.

We discussed the origins and history of the PCAST report.  The President asked PCAST how health IT could improve the quality of healthcare and reduce its cost, and whether existing Federal efforts in health IT are optimized for these goals. In response, PCAST formed a working group consisting of PCAST members and advisors in both healthcare and information technology.

The working group held meetings in Washington, D.C., on December 18, 2009, and in Irvine, California, on January 14-15, 2010, as well as additional meetings by teleconference. The viewpoints of researchers, policy analysts, and administrators from government, healthcare organizations, and universities were presented and discussed.

A draft report developed by the working group was submitted to the Health and Life Sciences committee of PCAST. That committee submitted the draft to several outside reviewers, who made valuable suggestions for improvements.  From the working group draft, the additional input, and its own discussions, the Health and Life Sciences committee produced the present report, which was discussed and endorsed (with some modifications) by the full PCAST in public session on July 16, 2010.

A disclaimer at the beginning of the report notes "Working Group members participated in the preparation of an initial draft of this report. They are not responsible for, nor necessarily endorse, the final version of this report as modified and approved by PCAST."

We identified a number of key themes in the report:
1.  The foundation for healthcare information exchange should be built on an XML-based Universal Exchange Language
2.  Data elements should be separable from documents
3.  Metadata should identify characteristics of each data element, i.e., how it was recorded, by whom, and for what patient
4.  Privacy controls should integrate patient consent preferences with metadata about the data available for exchange
5.  Search engine technology/data element access service indexing at a national level will accelerate data element discovery
6.  Data reuse with patient consent for clinical trials and population health is a priority

The key ideas from the discussion included:
a.  Thinking at a national scale is good to avoid creating regional health information exchange silos
b.  Messaging (such as HL7 2.x)  is still going to be needed to support event-based transactional workflows
c.  The strength of the PCAST report is in supporting exchange models that require aggregation - research, epidemiology, and unanticipated interactions such as Emergency Department visits http://geekdoctor.blogspot.com/2010/09/unconscious-in-emergency-department.html
d.  For some uses such as communication among providers, encounter summaries which provide structured and unstructured data in context, are more useful than data elements
e.  Many data elements are not useful on their own, and a module/collection of data elements would be better, e.g., allergies should include the substance, the onset date, the type of reaction, the severity of the reaction, and the level of certainty of the reaction (your mother reported it based on a distant memory versus a clinician observed it happening).   To understand how best to collect data elements into modules, clinical data models would be very helpful; a hypothetical sketch of such a module appears after this list.
f.  Since information is going to be exchanged among multiple parties, metadata will need to include the provenance of the data so that data is not duplicated multiple times, e.g., Hospital A sends data to Hospitals B and C.  C requests a copy of B's data (which includes data from B and A), and it should be possible to avoid storing a duplicate of A's data, which C already has.
g.  We should proceed with the health information exchange work already in progress to achieve interoperability in support of Meaningful Use stage 1 and not derail current efforts.
h.  Fine-grained privacy (down to the data element level) will be challenging to implement and maintain.  Tagging elements with privacy characteristics is very hard because societal attitudes about the sensitivity of data elements may change over time.  HIV testing used to be a rare event, so the presence of an HIV test alone (not its result) could be concerning.   Today 1/3 of Americans have had an HIV test, generally as part of getting life or health insurance, so the presence of a test is no longer a stigma.
i.  The national scope suggested includes using web search engine technology to keep a data element index, identifying what data is available for what patients and where.   The policy and security issues of doing this are greater than the technology challenges.
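
As a concrete illustration of the module idea in item e, here is a hypothetical sketch of an allergy module. The field names and certainty scale are illustrative assumptions, not a standards-based clinical data model.

from dataclasses import dataclass
from datetime import date
from enum import Enum

class Certainty(Enum):
    REPORTED_FROM_MEMORY = "patient or family reported from a distant memory"
    CLINICIAN_OBSERVED = "reaction observed by a clinician"

@dataclass
class AllergyModule:
    substance: str        # e.g., "penicillin"
    onset_date: date
    reaction_type: str    # e.g., "hives"
    severity: str         # e.g., "moderate"
    certainty: Certainty

example = AllergyModule(
    substance="penicillin",
    onset_date=date(1995, 6, 1),
    reaction_type="hives",
    severity="moderate",
    certainty=Certainty.REPORTED_FROM_MEMORY,
)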

The next step for the PCAST report will be ONC's naming of a multi-stakeholder workgroup to review the report in detail and make recommendations by April.

We next heard about the planned Implementation Workgroup hearing regarding certification, Meaningful Use, and healthcare information exchange.  On January 10-11, the Workgroup will learn about early adopter successes and challenges.

Next, the Clinical Operations Workgroup reported on its plans to consider vocabulary and content issues for devices - critical care, implantable, and home care.   Issues include universal device identification, ensuring data integrity, and interoperability of devices that may require a clinical data model to ensure the meaning of data communicated is understood by EHRs, PHRs, and devices.

We next considered the standards and interoperability framework priorities as outlined by Doug Fridsma.   The S&I Framework contractors are working on clinical summaries, templated documents, laboratory results, medication reconciliation, provider directories, syndromic surveillance, quality, population health, clinical decision support, patient engagement, EHR-to-EHR data exchange, and value sets.

Points raised during this discussion included the need to include policy discussions throughout the process of harmonizing and testing standards.    We agreed that the Clinical Operations workgroup should study these priorities and make recommendations based on real world implementation experience that will help ONC and the contractors focus on the gaps to be addressed such as patient identification and vocabularies/codes sets.

We discussed the HIT Policy Committee's request for the Standards Committee to work on Certificate Management standards.  The Privacy and Security Workgroup will make recommendations for organization to organization and server to server certificate standards.

We next considered the Privacy and Security Workgroup's evaluation of NHIN Direct.  The Workgroup concluded that certificate exchange should not be limited to certificates stored in Domain Name System (DNS) applications.  It also suggested that XDR (a SOAP transaction) be removed from the NHIN Direct Core specification, reducing the complexity and optionality of the specification.    The only debate that arose during this discussion revolved around the issue of rejecting an NHIN Direct message because it did not meet regulatory requirements.  Specifically, the Privacy and Security Workgroup recommended the following language -

"Destinations MAY reject content that does not meet Destination expectations. For instance, some Destinations MAY require receipt of structured data, MAY support only particular content types, and MAY require receipt of XDM structured attachments."

Here's a use case that illustrates the issue:

Federal Regulations require quality measures to be sent in PQRI XML as of 2012.

A doctor uses NHIN Direct to send an unstructured text message to CMS "I achieved the quality measures you wanted me to!"

What should CMS do?
1.  Reject the message as not compliant with Federal regulations, notifying the sender as to the reason
2.  Accept the message, but contact the sender out of band to specify the requirements
3.  Accept the message, but later send a functional acknowledgement via NHIN Direct that the contents of the message did not qualify for meaningful use reporting requirements
etc.

In an email dialog following the HIT Standards Committee, many members agreed that the message should be rejected with an error message that the contents of the message did not meet regulatory requirements.

At the meeting, we agreed that decisions to reject or accept messages are a matter of policy and that the HIT Standards Committee should only recommend technology that enables messages to be sent securely and error messages to be provided to the message sender if policy requirements are not met.

A great meeting with significant progress on the PCAST review, S&I Framework review, and  the NHIN Direct review.  

Next month, we'll hear more about certificates, provider directories, and PCAST.  It's clear that the work of the Policy Committee on Certificates and Provider Directories, the work of NHIN Direct, and the work of HIT Standards Committee are converging such that we will soon have a unified approach to transport that will rapidly accelerate transmission of the standardized content and vocabularies already required by Meaningful Use.

Wednesday, December 15, 2010

Evil?

Mae West once said that "between two evils, I always pick the one I never tried before."

Scott Schumacher and I shared a panel at an ONC Privacy and Security Tiger Team hearing on patient matching (December 9th hearing).  There's some excellent testimony - well worth reading through.

We had a great exchange on the relative merits of linking versus merging which Scott summed up in a recent post: Why merging is evil.  "The criticism of the linked approach mostly concerns computation inefficiency. That is, if a user queries a particular member multiple times, the often-complex composite-view rules need to be executed each time. In the merge model, these rules are only executed once. In the merge model, we sometimes lose data and almost always lose the initial structure of the data. That is, we have lost information. In today’s world, information is precious. Losing it is evil."

One Tiger Team member commented that "There are possibly more evil things that can be done than a merge."  There are situations where data loss or loss of structure may introduce risks to safety or privacy.  But  I will set aside the use of "evil" in this regard as hyperbole.  

Scott's point is right as it relates to the linking systems for health information exchange (nation-wide exchange, this is the one, to channel Mae, that we've never tried before).  But it's far less obvious in connection with the linking systems inside of a care delivery organization.  In a health information exchange world, keeping records separate is good architecture and good practice.  The provider has no way of knowing and verifying the records of another provider.  The farther you move away from the source provider and the patient/person, the more important it is to maintain the source data intact.  Keeping records separate and linked for "Rich Elmore", "R Elmore", "Richard Elmore", and "Richard Ellmore" among distinct providers is critical to ensuring that the best possible rigor is applied at any point in time when the data is to be presented for analysis and use.
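
To make the linking-versus-merging distinction concrete, here is a toy sketch using the name variants above. The structures are illustrative, not any vendor's master patient index design.

# Source records stay intact under linking; the link table only records the
# belief that they refer to the same person, and the composite view is
# computed on demand (the "computational inefficiency" Scott describes).
source_records = {
    ("ProviderA", "1001"): {"name": "Rich Elmore"},
    ("ProviderB", "2002"): {"name": "R Elmore"},
    ("ProviderC", "3003"): {"name": "Richard Elmore"},
    ("ProviderD", "4004"): {"name": "Richard Ellmore"},
}
link_table = {"person-42": list(source_records)}

def composite_view(person_id):
    return [source_records[key] for key in link_table[person_id]]

# Merging produces a single survivor record once; the originals and their
# structure are gone, which is the information loss at issue.
merged_record = {"person-42": {"name": "Richard Elmore"}}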

The Tiger Team members in their post-hearing discussion (MP3 audio on 12/10 from 1:02 - 1:06) made the argument that care providers will generally want to manually correct errors, not persist them, and would do so manually if  merge capabilities were not available.  They also pointed out that merging does not necessarily result in loss of the data structure - there are merging approaches such as creating a composite from the duplicate patient records that avoids this issue.

Ultimately, the Tiger Team concluded that this question (regarding the impact on the clinical records  downstream  from the patient linking system) is out of scope.  In other words, it's not a question of  "what you do downstream when you discover a possible link.  If you built a linking system that... lost all evidence that (two patients) were separate at one time, then that would be evil.  However, what you do within your medical records system with separate linkages or merged data - that's a separate issue."  

Some important related but unresolved questions were also posed by the Tiger Team members.  What is the physician's role in correcting the record when care has been provided based on an improperly merged (or linked for that matter) medical record?   And what are the responsibilities of these "downstream" systems related to handling errors in linking or  merging?

Patient matching and the downstream system implications are ever more important as health information exchange takes center stage.  Scott deserves credit for grabbing our attention with his "evil commentary".

Tuesday, December 14, 2010

Lumpers and splitters reconciled - Getting from complexity to simplicity

Ecologist Eric Berlow contends that complex doesn't equal complicated. Through the use of visualization tools on an ordered network, he shows how analytics applied to the natural world have application to other spheres. 3 minute presentation.

Monday, December 13, 2010

2010 ONC Update Meeting December 14-15


The Office of the National Coordinator for Health Information Technology (ONC), the Centers for Medicare & Medicaid Services (CMS), the Office for Civil Rights (OCR), and other HHS agencies are dedicated to improving the nation’s health care through health information technology (health IT).
Since the Health Information Technology for Economic and Clinical Health (HITECH) Act was signed into law in February 2009, we have established a number of initiatives that will help make it possible for providers to achieve meaningful use and for Americans to benefit from electronic health records as part of a modernized, interconnected, and vastly improved system of care delivery.
This year alone, we have established a number of important policies and programs to help lay the foundation for providers to begin their journey toward meaningful use.
It’s been a busy year for health IT at HHS.
We are looking forward to discussing more about all of our HITECH initiatives to date, as well as our future activities, at the upcoming 2010 ONC Update Meeting on December 14 and 15.
Over the course of this two-day meeting, we are offering a number of sessions that will give participants a better understanding of the HITECH regulations and the role that HITECH plays in health system change and health care reform. Some session topics include:
  • HITECH programs that support providers in achieving meaningful use
  • How HITECH initiatives will promote consumer empowerment and public engagement
  • Privacy and security policies
Our panelists and invited speakers include HHS Secretary Kathleen Sebelius and leaders from CDC, CMS, OCR, ONC and organizations who have a stake in our work. We are excited about the opportunity to share information and ideas.
The plenary sessions at this meeting will be streamed through a live webcast. Details about the webcast are available on the ONC website: http://healthit.hhs.gov/ONCMeeting2010.
Thank you in advance for joining us at the 2010 ONC Update Meeting and for supporting our vision of a higher quality, safer, and more efficient health care system enabled by health information technology.
Sincerely,
David Blumenthal, MD, MPP
National Coordinator for Health Information Technology

NIST EHR Usability Guide

NIST has released its guide to usability for Electronic Health Records, applying the principles and processes of user-centered design and usability testing. ISO standards describe usability as "the effectiveness, efficiency, and satisfaction with which the intended users can achieve their tasks in the intended context of product use."

Effectiveness measures include "the percentage of tasks accomplished (success rate), the percentage of tasks achieved per unit of time, and the path or manner in which the tasks were achieved."

Efficiency is established by the time taken to complete each task.

Satisfaction is based on subjective measurement tools. Two referenced publicly available scales are the Software Usability Measurement Inventory (SUMI) and the System Usability Scale (SUS).
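
As a concrete example of the second instrument, SUS is scored from ten responses on a 1-to-5 scale: odd-numbered items contribute (response - 1), even-numbered items contribute (5 - response), and the sum is multiplied by 2.5 to give a 0-100 score. A small sketch (the sample responses are made up):

def sus_score(responses):
    """Score a ten-item System Usability Scale questionnaire (responses 1-5)."""
    assert len(responses) == 10
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)   # indexes 0, 2, 4... are odd-numbered items
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 2.5

print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 1]))  # -> 85.0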

The guide includes a good framework for formative and summative usability testing.

Barriers chart from Gans et al., 2005, Health Affairs.

NIST Guide to the Processes Approach for Improving the Usability of Electronic Health Records